Dataset fields and observed value ranges:
id: int64 (values 12 to 1.07M)
title: string (1 to 124 characters)
text: string (0 to 228k characters)
paragraphs: list
abstract: string (0 to 123k characters)
date_created: string (0 to 20 characters)
date_modified: string (20 characters)
templates: list
url: string (31 to 154 characters)
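Each row below describes one Wikipedia article, with the paragraphs field holding a list of objects of the form {"paragraph_id", "text", "title"}, as seen in the sample rows. The following is a minimal sketch of reading rows with this schema, assuming they are stored as JSON Lines; the file name wikipedia_rows.jsonl and the helper names are hypothetical, not part of the dataset itself.

```python
import json

def iter_rows(path):
    # Read one JSON object per line; each object is assumed to carry the
    # fields listed above (id, title, text, paragraphs, abstract, dates, templates, url).
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def section_titles(row):
    # The "paragraphs" field is a list of {"paragraph_id", "text", "title"} dicts;
    # collect the non-empty section titles in order of first appearance.
    seen = []
    for para in row.get("paragraphs", []):
        title = para.get("title", "")
        if title and title not in seen:
            seen.append(title)
    return seen

if __name__ == "__main__":
    for row in iter_rows("wikipedia_rows.jsonl"):
        print(row["id"], row["title"], row["url"])
        print("  sections:", ", ".join(section_titles(row)) or "(lead only)")
```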
9,461
Eric Raymond (disambiguation)
Eric S. Raymond (born 1957) is an American computer programmer and author. Eric Raymond may also refer to:
[ { "paragraph_id": 0, "text": "Eric S. Raymond (born 1957) is an American computer programmer and author.", "title": "" }, { "paragraph_id": 1, "text": "Eric Raymond may also refer to:", "title": "" } ]
Eric S. Raymond is an American computer programmer and author. Eric Raymond may also refer to: Eric Scott Raymond, American flight instructor and glider pilot Eric Raymond (Jem), a fictional character in the 1980s cartoon television show Jem
2015-03-12T00:40:53Z
[ "Template:Hndis" ]
https://en.wikipedia.org/wiki/Eric_Raymond_(disambiguation)
9,467
Longest word in English
The identity of the longest word in English depends on the definition of a word and of length. Words may be derived naturally from the language's roots or formed by coinage and construction. Additionally, comparisons are complicated because place names may be considered words, technical terms may be arbitrarily long, and the addition of suffixes and prefixes may extend the length of words to create grammatically correct but unused or novel words. Different dictionaries include and omit different words. The length of a word may also be understood in multiple ways. Most commonly, length is based on orthography (conventional spelling rules) and counting the number of written letters. Alternate, but less common, approaches include phonology (the spoken language) and the number of phonemes (sounds). The longest word in any of the major English language dictionaries is pneumonoultramicroscopicsilicovolcanoconiosis (45 letters), a word that refers to a lung disease contracted from the inhalation of very fine silica particles, specifically from a volcano; medically, it is the same as silicosis. The word was deliberately coined to be the longest word in English, and has since been used in a close approximation of its originally intended meaning, lending at least some degree of validity to its claim. The Oxford English Dictionary contains pseudopseudohypoparathyroidism (30 letters). Merriam-Webster's Collegiate Dictionary does not contain antidisestablishmentarianism (28 letters), as the editors found no widespread, sustained usage of the word in its original meaning. The longest word in that dictionary is electroencephalographically (27 letters). The longest non-technical word in major dictionaries is floccinaucinihilipilification at 29 letters. Consisting of a series of Latin words meaning "nothing" and defined as "the act of estimating something as worthless"; its usage has been recorded as far back as 1741. Ross Eckler has noted that most of the longest English words are not likely to occur in general text, meaning non-technical present-day text seen by casual readers, in which the author did not specifically intend to use an unusually long word. According to Eckler, the longest words likely to be encountered in general text are deinstitutionalization and counterrevolutionaries, with 22 letters each. A computer study of over a million samples of normal English prose found that the longest word one is likely to encounter on an everyday basis is uncharacteristically, at 20 letters. The word internationalization is abbreviated "i18n", the embedded number representing the number of letters between the first and the last. In his play Assemblywomen (Ecclesiazousae), the ancient Greek comedic playwright Aristophanes created a word of 171 letters (183 in the transliteration below), which describes a dish by stringing together its ingredients: Henry Carey's farce Chrononhotonthologos (1743) holds the opening line: "Aldiborontiphoscophornio! Where left you Chrononhotonthologos?" Thomas Love Peacock put these creations into the mouth of the phrenologist Mr. Cranium in his 1816 book Headlong Hall: osteosarchaematosplanchnochondroneuromuelous (44 characters) and osseocarnisanguineoviscericartilaginonervomedullary (51 characters). James Joyce made up nine 100-letter words plus one 101-letter word in his novel Finnegans Wake, the most famous of which is Bababadalgharaghtakamminarronnkonnbronntonnerronntuonnthunntrovarrhounawnskawntoohoohoordenenthurnuk. 
Appearing on the first page, it allegedly represents the symbolic thunderclap associated with the fall of Adam and Eve. As it appears nowhere else except in reference to this passage, it is generally not accepted as a real word. Sylvia Plath made mention of it in her semi-autobiographical novel The Bell Jar, when the protagonist was reading Finnegans Wake. "Supercalifragilisticexpialidocious", the 34-letter title of a song from the movie Mary Poppins, does appear in several dictionaries, but only as a proper noun defined in reference to the song title. The attributed meaning is "a word that you say when you don't know what to say." The idea and invention of the word is credited to songwriters Robert and Richard Sherman. The English language permits the legitimate extension of existing words to serve new purposes by the addition of prefixes and suffixes. This is sometimes referred to as agglutinative construction. This process can create arbitrarily long words: for example, the prefixes pseudo (false, spurious) and anti (against, opposed to) can be added as many times as desired. More familiarly, the addition of numerous "great"s to a relative, such as "great-great-great-great-grandparent", can produce words of arbitrary length. In musical notation, an 8192nd note may be called a semihemidemisemihemidemisemihemidemisemiquaver. Antidisestablishmentarianism is the longest common example of a word formed by agglutinative construction. A number of scientific naming schemes can be used to generate arbitrarily long words. The IUPAC nomenclature for organic chemical compounds is open-ended, giving rise to the 189,819-letter chemical name Methionylthreonylthreonyl . . . isoleucine for the protein also known as titin, which is involved in striated muscle formation. In nature, DNA molecules can be much bigger than protein molecules and therefore potentially be referred to with much longer chemical names. For example, the wheat chromosome 3B contains almost 1 billion base pairs, so the sequence of one of its strands, if written out in full like Adenilyladenilylguanilylcystidylthymidyl . . . , would be about 8 billion letters long. The longest published word, Acetylseryltyrosylseryliso . . . serine, referring to the coat protein of a certain strain of tobacco mosaic virus (P03575), is 1,185 letters long, and appeared in the American Chemical Society's Chemical Abstracts Service in 1964 and 1966. In 1965, the Chemical Abstracts Service overhauled its naming system and started discouraging excessively long names. In 2011, a dictionary broke this record with a 1909-letter word describing the trpA protein (P0A877). John Horton Conway and Landon Curt Noll developed an open-ended system for naming powers of 10, in which one sexmilliaquingentsexagintillion, coming from the Latin name for 6560, is the name for 10^19683 = 10^(3×6560+3). Under the long number scale, it would be 10^39360 = 10^(6×6560). Gammaracanthuskytodermogammarus loricatobaicalensis is sometimes cited as the longest binomial name; it is a kind of amphipod. However, this name, proposed by B. Dybowski, was invalidated by the International Code of Zoological Nomenclature in 1929 after being petitioned by Mary J. Rathbun to take up the case. Myxococcus llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis is the longest accepted binomial name for an organism. It is a bacterium found in soil collected at Llanfairpwllgwyngyll (discussed below).
Parastratiosphecomyia stratiosphecomyioides is the longest accepted binomial name for any animal, or any organism visible with the naked eye. It is a species of soldier fly. The genus name Parapropalaehoplophorus (a fossil glyptodont, an extinct family of mammals related to armadillos) is two letters longer, but does not contain a similarly long species name. Aequeosalinocalcalinoceraceoaluminosocupreovitriolic, at 52 letters, describing the spa waters at Bath, England, is attributed to Dr. Edward Strother (1675–1737). The word is composed of the following elements: The longest officially recognized place name in an English-speaking country is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which is a hill in New Zealand. The name is in the Māori language. A widely recognized version of the name is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which appears on the signpost at the location (see the photo on this page). In Māori, the digraphs ng and wh are each treated as single letters. In Canada, the longest place name is Dysart, Dudley, Harcourt, Guilford, Harburn, Bruton, Havelock, Eyre and Clyde, a township in Ontario, at 61 letters or 68 non-space characters. The 58-letter name Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch is the name of a town on Anglesey, an island of Wales. In terms of the traditional Welsh alphabet, the name is only 51 letters long, as certain digraphs in Welsh are considered as single letters, for instance ll, ng and ch. It is generally agreed, however, that this invented name, adopted in the mid-19th century, was contrived solely to be the longest name of any town in Britain. The official name of the place is Llanfairpwllgwyngyll, commonly abbreviated to Llanfairpwll or Llanfair PG. The longest non-contrived place name in the United Kingdom which is a single non-hyphenated word is Cottonshopeburnfoot (19 letters) and the longest which is hyphenated is Sutton-under-Whitestonecliffe (29 characters). The longest place name in the United States (45 letters) is Chargoggagoggmanchauggagoggchaubunagungamaugg, a lake in Webster, Massachusetts. It means "Fishing Place at the Boundaries – Neutral Meeting Grounds" and is sometimes facetiously translated as "you fish your side of the water, I fish my side of the water, nobody fishes the middle". The lake is also known as Webster Lake. The longest hyphenated names in the U.S. are Winchester-on-the-Severn, a town in Maryland, and Washington-on-the-Brazos, a notable place in Texas history. The longest single-word town names in the U.S. are Kleinfeltersville, Pennsylvania and Mooselookmeguntic, Maine. The longest official geographical name in Australia is Mamungkukumpurangkuntjunya. It has 26 letters and is a Pitjantjatjara word meaning "where the Devil urinates". Liechtenstein is the longest single-word country name in English, and the second-longest is Turkmenistan. Guinness World Records formerly contained a category for longest personal name used. Long birth names are often coined in protest of naming laws or for other personal reasons.
[ { "paragraph_id": 0, "text": "The identity of the longest word in English depends on the definition of a word and of length.", "title": "" }, { "paragraph_id": 1, "text": "Words may be derived naturally from the language's roots or formed by coinage and construction. Additionally, comparisons are complicated because place names may be considered words, technical terms may be arbitrarily long, and the addition of suffixes and prefixes may extend the length of words to create grammatically correct but unused or novel words. Different dictionaries include and omit different words.", "title": "" }, { "paragraph_id": 2, "text": "The length of a word may also be understood in multiple ways. Most commonly, length is based on orthography (conventional spelling rules) and counting the number of written letters. Alternate, but less common, approaches include phonology (the spoken language) and the number of phonemes (sounds).", "title": "" }, { "paragraph_id": 3, "text": "The longest word in any of the major English language dictionaries is pneumonoultramicroscopicsilicovolcanoconiosis (45 letters), a word that refers to a lung disease contracted from the inhalation of very fine silica particles, specifically from a volcano; medically, it is the same as silicosis. The word was deliberately coined to be the longest word in English, and has since been used in a close approximation of its originally intended meaning, lending at least some degree of validity to its claim.", "title": "Major dictionaries" }, { "paragraph_id": 4, "text": "The Oxford English Dictionary contains pseudopseudohypoparathyroidism (30 letters).", "title": "Major dictionaries" }, { "paragraph_id": 5, "text": "Merriam-Webster's Collegiate Dictionary does not contain antidisestablishmentarianism (28 letters), as the editors found no widespread, sustained usage of the word in its original meaning. The longest word in that dictionary is electroencephalographically (27 letters).", "title": "Major dictionaries" }, { "paragraph_id": 6, "text": "The longest non-technical word in major dictionaries is floccinaucinihilipilification at 29 letters. Consisting of a series of Latin words meaning \"nothing\" and defined as \"the act of estimating something as worthless\"; its usage has been recorded as far back as 1741.", "title": "Major dictionaries" }, { "paragraph_id": 7, "text": "Ross Eckler has noted that most of the longest English words are not likely to occur in general text, meaning non-technical present-day text seen by casual readers, in which the author did not specifically intend to use an unusually long word. 
According to Eckler, the longest words likely to be encountered in general text are deinstitutionalization and counterrevolutionaries, with 22 letters each.", "title": "Major dictionaries" }, { "paragraph_id": 8, "text": "A computer study of over a million samples of normal English prose found that the longest word one is likely to encounter on an everyday basis is uncharacteristically, at 20 letters.", "title": "Major dictionaries" }, { "paragraph_id": 9, "text": "The word internationalization is abbreviated \"i18n\", the embedded number representing the number of letters between the first and the last.", "title": "Major dictionaries" }, { "paragraph_id": 10, "text": "In his play Assemblywomen (Ecclesiazousae), the ancient Greek comedic playwright Aristophanes created a word of 171 letters (183 in the transliteration below), which describes a dish by stringing together its ingredients:", "title": "Creations of long words" }, { "paragraph_id": 11, "text": "Henry Carey's farce Chrononhotonthologos (1743) holds the opening line: \"Aldiborontiphoscophornio! Where left you Chrononhotonthologos?\"", "title": "Creations of long words" }, { "paragraph_id": 12, "text": "Thomas Love Peacock put these creations into the mouth of the phrenologist Mr. Cranium in his 1816 book Headlong Hall: osteosarchaematosplanchnochondroneuromuelous (44 characters) and osseocarnisanguineoviscericartilaginonervomedullary (51 characters).", "title": "Creations of long words" }, { "paragraph_id": 13, "text": "James Joyce made up nine 100-letter words plus one 101-letter word in his novel Finnegans Wake, the most famous of which is Bababadalgharaghtakamminarronnkonnbronntonnerronntuonnthunntrovarrhounawnskawntoohoohoordenenthurnuk. Appearing on the first page, it allegedly represents the symbolic thunderclap associated with the fall of Adam and Eve. As it appears nowhere else except in reference to this passage, it is generally not accepted as a real word. Sylvia Plath made mention of it in her semi-autobiographical novel The Bell Jar, when the protagonist was reading Finnegans Wake.", "title": "Creations of long words" }, { "paragraph_id": 14, "text": "\"Supercalifragilisticexpialidocious\", the 34-letter title of a song from the movie Mary Poppins, does appear in several dictionaries, but only as a proper noun defined in reference to the song title. The attributed meaning is \"a word that you say when you don't know what to say.\" The idea and invention of the word is credited to songwriters Robert and Richard Sherman.", "title": "Creations of long words" }, { "paragraph_id": 15, "text": "The English language permits the legitimate extension of existing words to serve new purposes by the addition of prefixes and suffixes. This is sometimes referred to as agglutinative construction. This process can create arbitrarily long words: for example, the prefixes pseudo (false, spurious) and anti (against, opposed to) can be added as many times as desired. More familiarly, the addition of numerous \"great\"s to a relative, such as \"great-great-great-great-grandparent\", can produce words of arbitrary length. 
In musical notation, an 8192nd note may be called a semihemidemisemihemidemisemihemidemisemiquaver.", "title": "Creations of long words" }, { "paragraph_id": 16, "text": "Antidisestablishmentarianism is the longest common example of a word formed by agglutinative construction.", "title": "Creations of long words" }, { "paragraph_id": 17, "text": "A number of scientific naming schemes can be used to generate arbitrarily long words.", "title": "Creations of long words" }, { "paragraph_id": 18, "text": "The IUPAC nomenclature for organic chemical compounds is open-ended, giving rise to the 189,819-letter chemical name Methionylthreonylthreonyl . . . isoleucine for the protein also known as titin, which is involved in striated muscle formation. In nature, DNA molecules can be much bigger than protein molecules and therefore potentially be referred to with much longer chemical names. For example, the wheat chromosome 3B contains almost 1 billion base pairs, so the sequence of one of its strands, if written out in full like Adenilyladenilylguanilylcystidylthymidyl . . . , would be about 8 billion letters long. The longest published word, Acetylseryltyrosylseryliso . . . serine, referring to the coat protein of a certain strain of tobacco mosaic virus (P03575), is 1,185 letters long, and appeared in the American Chemical Society's Chemical Abstracts Service in 1964 and 1966. In 1965, the Chemical Abstracts Service overhauled its naming system and started discouraging excessively long names. In 2011, a dictionary broke this record with a 1909-letter word describing the trpA protein (P0A877).", "title": "Creations of long words" }, { "paragraph_id": 19, "text": "John Horton Conway and Landon Curt Noll developed an open-ended system for naming powers of 10, in which one sexmilliaquingentsexagintillion, coming from the Latin name for 6560, is the name for 10 = 10. Under the long number scale, it would be 10 = 10.", "title": "Creations of long words" }, { "paragraph_id": 20, "text": "Gammaracanthuskytodermogammarus loricatobaicalensis is sometimes cited as the longest binomial name—it is a kind of amphipod. However, this name, proposed by B. Dybowski, was invalidated by the International Code of Zoological Nomenclature in 1929 after being petitioned by Mary J. Rathbun to take up the case.", "title": "Creations of long words" }, { "paragraph_id": 21, "text": "Myxococcus llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogochensis is the longest accepted binomial name for an organism. It is a bacterium found in soil collected at Llanfairpwllgwyngyll (discussed below). Parastratiosphecomyia stratiosphecomyioides is the longest accepted binomial name for any animal, or any organism visible with the naked eye. It is a species of soldier fly. The genus name Parapropalaehoplophorus (a fossil glyptodont, an extinct family of mammals related to armadillos) is two letters longer, but does not contain a similarly long species name.", "title": "Creations of long words" }, { "paragraph_id": 22, "text": "Aequeosalinocalcalinoceraceoaluminosocupreovitriolic, at 52 letters, describing the spa waters at Bath, England, is attributed to Dr. Edward Strother (1675–1737). The word is composed of the following elements:", "title": "Creations of long words" }, { "paragraph_id": 23, "text": "The longest officially recognized place name in an English-speaking country is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which is a hill in New Zealand. 
The name is in the Māori language. A widely recognized version of the name is Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu (85 letters), which appears on the signpost at the location (see the photo on this page). In Māori, the digraphs ng and wh are each treated as single letters.", "title": "Notable long words" }, { "paragraph_id": 24, "text": "In Canada, the longest place name is Dysart, Dudley, Harcourt, Guilford, Harburn, Bruton, Havelock, Eyre and Clyde, a township in Ontario, at 61 letters or 68 non-space characters.", "title": "Notable long words" }, { "paragraph_id": 25, "text": "The 58-letter name Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch is the name of a town on Anglesey, an island of Wales. In terms of the traditional Welsh alphabet, the name is only 51 letters long, as certain digraphs in Welsh are considered as single letters, for instance ll, ng and ch. It is generally agreed, however, that this invented name, adopted in the mid-19th century, was contrived solely to be the longest name of any town in Britain. The official name of the place is Llanfairpwllgwyngyll, commonly abbreviated to Llanfairpwll or Llanfair PG.", "title": "Notable long words" }, { "paragraph_id": 26, "text": "The longest non-contrived place name in the United Kingdom which is a single non-hyphenated word is Cottonshopeburnfoot (19 letters) and the longest which is hyphenated is Sutton-under-Whitestonecliffe (29 characters).", "title": "Notable long words" }, { "paragraph_id": 27, "text": "The longest place name in the United States (45 letters) is Chargoggagoggmanchauggagoggchaubunagungamaugg, a lake in Webster, Massachusetts. It means \"Fishing Place at the Boundaries – Neutral Meeting Grounds\" and is sometimes facetiously translated as \"you fish your side of the water, I fish my side of the water, nobody fishes the middle\". The lake is also known as Webster Lake. The longest hyphenated names in the U.S. are Winchester-on-the-Severn, a town in Maryland, and Washington-on-the-Brazos, a notable place in Texas history. The longest single-word town names in the U.S. are Kleinfeltersville, Pennsylvania and Mooselookmeguntic, Maine.", "title": "Notable long words" }, { "paragraph_id": 28, "text": "The longest official geographical name in Australia is Mamungkukumpurangkuntjunya. It has 26 letters and is a Pitjantjatjara word meaning \"where the Devil urinates\".", "title": "Notable long words" }, { "paragraph_id": 29, "text": "Liechtenstein is the longest single-word country name in English, and the second-longest is Turkmenistan.", "title": "Notable long words" }, { "paragraph_id": 30, "text": "Guinness World Records formerly contained a category for longest personal name used.", "title": "Notable long words" }, { "paragraph_id": 31, "text": "Long birth names are often coined in protest of naming laws or for other personal reasons.", "title": "Notable long words" } ]
The identity of the longest word in English depends on the definition of a word and of length. Words may be derived naturally from the language's roots or formed by coinage and construction. Additionally, comparisons are complicated because place names may be considered words, technical terms may be arbitrarily long, and the addition of suffixes and prefixes may extend the length of words to create grammatically correct but unused or novel words. Different dictionaries include and omit different words. The length of a word may also be understood in multiple ways. Most commonly, length is based on orthography and counting the number of written letters. Alternate, but less common, approaches include phonology and the number of phonemes (sounds).
2001-10-04T09:00:11Z
2023-11-27T13:23:30Z
[ "Template:Short description", "Template:Fact", "Template:Not a typo", "Template:Main", "Template:Spoken Wikipedia", "Template:Mono", "Template:Reflist", "Template:Dead link", "Template:Columns-list", "Template:Cbignore", "Template:Cite book", "Template:Pp-vandalism", "Template:Uniprot", "Template:Shy", "Template:Nbs", "Template:Visible anchor", "Template:Cite journal", "Template:Citation", "Template:Wiktionary category 2", "Template:Webarchive", "Template:Cite news", "Template:Nowrap", "Template:See also", "Template:IPA-sv", "Template:Original research", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Longest_word_in_English
9,469
Eric S. Raymond
Eric Steven Raymond (born December 4, 1957), often referred to as ESR, is an American software developer, open-source software advocate, and author of the 1997 essay and 1999 book The Cathedral and the Bazaar. He wrote a guidebook for the Roguelike game NetHack. In the 1990s, he edited and updated the Jargon File, published as The New Hacker's Dictionary. Raymond was born in Boston, Massachusetts in 1957, and lived in Venezuela as a child. His family moved to Pennsylvania in 1971. He developed cerebral palsy at birth; his weakened physical condition motivated him to go into computing. Raymond began his programming career writing proprietary software, between 1980 and 1985. In 1990, noting that the Jargon File had not been maintained since about 1983, he adopted it, but not without criticism; Paul Dourish maintains an archived original version of the Jargon File, because, he says, Raymond's updates "essentially destroyed what held it together." In 1996 Raymond took over development of the open-source email software "popclient", renaming it to Fetchmail. Soon after this experience, in 1997, he wrote the essay "The Cathedral and the Bazaar", detailing his thoughts on open-source software development and why it should be done as openly as possible (the "bazaar" approach). The essay was based in part on his experience in developing Fetchmail. He first presented his thesis at the annual Linux Kongress on May 27, 1997. He later expanded the essay into a book, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, in 1999. The essay has been widely cited. The internal white paper by Frank Hecker that led to the release of the Mozilla (then Netscape) source code in 1998 cited The Cathedral and the Bazaar as "independent validation" of ideas proposed by Eric Hahn and Jamie Zawinski. Hahn would later describe the 1999 book as "clearly influential". From the late 1990s onward, due in part to the popularity of his essay, Raymond became a prominent voice in the open source movement. He co-founded the Open Source Initiative (OSI) in 1998, taking on the self-appointed role of ambassador of open source to the press, business and public. He remains active in OSI, but stepped down as president of the initiative in February 2005. In early March 2020, he was removed from two Open Source Initiative mailing lists due to posts that violated the OSI's Code of Conduct. In 1998 Raymond received and published a Microsoft document expressing worry about the quality of rival open-source software. He named this document, together with others subsequently leaked, "The Halloween Documents". In 2000–2002 he created Configuration Menu Language 2 (CML2), a source code configuration system; while originally intended for the Linux operating system, it was rejected by kernel developers. (Raymond attributed this rejection to "kernel list politics", but Linus Torvalds said in a 2007 mailing list post that as a matter of policy, the development team preferred more incremental changes.) Raymond's 2003 book The Art of Unix Programming discusses user tools for programming and other tasks. Some versions of NetHack still include Raymond's guide. He has also contributed code and content to the free software video game The Battle for Wesnoth. Raymond is the main developer of NTPSec, a "secure, hardened replacement" for the Unix utility NTP. Raymond coined an aphorism he dubbed Linus's law, inspired by Linus Torvalds: "Given enough eyeballs, all bugs are shallow". 
It first appeared in his book The Cathedral and the Bazaar. Raymond has refused to speculate on whether the "bazaar" development model could be applied to works such as books and music, saying that he does not want to "weaken the winning argument for open-sourcing software by tying it to a potential loser". Raymond has had a number of public disputes with other figures in the free software movement. As head of the Open Source Initiative, he argued that advocates should focus on the potential for better products. The "very seductive" moral and ethical rhetoric of Richard Stallman and the Free Software Foundation fails, he said, "not because his principles are wrong, but because that kind of language ... simply does not persuade anybody". In a 2008 essay he defended programmers' right to issue work under proprietary licenses: "I think that if a programmer wants to write a program and sell it, it's neither my business nor anyone else's but his customer's what the terms of sale are." In the same essay he said that the "logic of the system" puts developers into "dysfunctional roles", with bad code the result. Raymond is a member of the Libertarian Party and a gun rights advocate. He has endorsed the open source firearms organization Defense Distributed, calling them "friends of freedom" and writing "I approve of any development that makes it more difficult for governments and criminals to monopolize the use of force. As 3D printers become less expensive and more ubiquitous, this could be a major step in the right direction." In 2015 Raymond accused the Ada Initiative and other women in tech groups of attempting to entrap male open source leaders and accuse them of rape, saying "Try to avoid even being alone, ever, because there is a chance that a 'women in tech' advocacy group is going to try to collect your scalp." Raymond has claimed that "Gays experimented with unfettered promiscuity in the 1970s and got AIDS as a consequence", and that "Police who react to a random black male behaving suspiciously who might be in the critical age range as though he is an near-imminent lethal threat, are being rational, not racist." A progressive campaign, "The Great Slate", was successful in raising funds for candidates in part by asking for contributions from tech workers in return for not posting similar quotes by Raymond. Matasano Security employee and Great Slate fundraiser Thomas Ptacek said, "I've been torturing Twitter with lurid Eric S. Raymond quotes for years. Every time I do, 20 people beg me to stop." It is estimated that, as of March 2018, over $30,000 has been raised in this way. Raymond describes himself as neo-pagan.
[ { "paragraph_id": 0, "text": "Eric Steven Raymond (born December 4, 1957), often referred to as ESR, is an American software developer, open-source software advocate, and author of the 1997 essay and 1999 book The Cathedral and the Bazaar. He wrote a guidebook for the Roguelike game NetHack. In the 1990s, he edited and updated the Jargon File, published as The New Hacker's Dictionary.", "title": "" }, { "paragraph_id": 1, "text": "Raymond was born in Boston, Massachusetts in 1957, and lived in Venezuela as a child. His family moved to Pennsylvania in 1971. He developed cerebral palsy at birth; his weakened physical condition motivated him to go into computing.", "title": "Early life" }, { "paragraph_id": 2, "text": "Raymond began his programming career writing proprietary software, between 1980 and 1985. In 1990, noting that the Jargon File had not been maintained since about 1983, he adopted it, but not without criticism; Paul Dourish maintains an archived original version of the Jargon File, because, he says, Raymond's updates \"essentially destroyed what held it together.\"", "title": "Career" }, { "paragraph_id": 3, "text": "In 1996 Raymond took over development of the open-source email software \"popclient\", renaming it to Fetchmail. Soon after this experience, in 1997, he wrote the essay \"The Cathedral and the Bazaar\", detailing his thoughts on open-source software development and why it should be done as openly as possible (the \"bazaar\" approach). The essay was based in part on his experience in developing Fetchmail. He first presented his thesis at the annual Linux Kongress on May 27, 1997. He later expanded the essay into a book, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, in 1999. The essay has been widely cited. The internal white paper by Frank Hecker that led to the release of the Mozilla (then Netscape) source code in 1998 cited The Cathedral and the Bazaar as \"independent validation\" of ideas proposed by Eric Hahn and Jamie Zawinski. Hahn would later describe the 1999 book as \"clearly influential\".", "title": "Career" }, { "paragraph_id": 4, "text": "From the late 1990s onward, due in part to the popularity of his essay, Raymond became a prominent voice in the open source movement. He co-founded the Open Source Initiative (OSI) in 1998, taking on the self-appointed role of ambassador of open source to the press, business and public. He remains active in OSI, but stepped down as president of the initiative in February 2005. In early March 2020, he was removed from two Open Source Initiative mailing lists due to posts that violated the OSI's Code of Conduct.", "title": "Career" }, { "paragraph_id": 5, "text": "In 1998 Raymond received and published a Microsoft document expressing worry about the quality of rival open-source software. He named this document, together with others subsequently leaked, \"The Halloween Documents\".", "title": "Career" }, { "paragraph_id": 6, "text": "In 2000–2002 he created Configuration Menu Language 2 (CML2), a source code configuration system; while originally intended for the Linux operating system, it was rejected by kernel developers. (Raymond attributed this rejection to \"kernel list politics\", but Linus Torvalds said in a 2007 mailing list post that as a matter of policy, the development team preferred more incremental changes.) 
Raymond's 2003 book The Art of Unix Programming discusses user tools for programming and other tasks.", "title": "Career" }, { "paragraph_id": 7, "text": "Some versions of NetHack still include Raymond's guide. He has also contributed code and content to the free software video game The Battle for Wesnoth.", "title": "Career" }, { "paragraph_id": 8, "text": "Raymond is the main developer of NTPSec, a \"secure, hardened replacement\" for the Unix utility NTP.", "title": "Career" }, { "paragraph_id": 9, "text": "Raymond coined an aphorism he dubbed Linus's law, inspired by Linus Torvalds: \"Given enough eyeballs, all bugs are shallow\". It first appeared in his book The Cathedral and the Bazaar.", "title": "Views on open source" }, { "paragraph_id": 10, "text": "Raymond has refused to speculate on whether the \"bazaar\" development model could be applied to works such as books and music, saying that he does not want to \"weaken the winning argument for open-sourcing software by tying it to a potential loser\".", "title": "Views on open source" }, { "paragraph_id": 11, "text": "Raymond has had a number of public disputes with other figures in the free software movement. As head of the Open Source Initiative, he argued that advocates should focus on the potential for better products. The \"very seductive\" moral and ethical rhetoric of Richard Stallman and the Free Software Foundation fails, he said, \"not because his principles are wrong, but because that kind of language ... simply does not persuade anybody\".", "title": "Views on open source" }, { "paragraph_id": 12, "text": "In a 2008 essay he defended programmers' right to issue work under proprietary licenses: \"I think that if a programmer wants to write a program and sell it, it's neither my business nor anyone else's but his customer's what the terms of sale are.\" In the same essay he said that the \"logic of the system\" puts developers into \"dysfunctional roles\", with bad code the result.", "title": "Views on open source" }, { "paragraph_id": 13, "text": "Raymond is a member of the Libertarian Party and a gun rights advocate. He has endorsed the open source firearms organization Defense Distributed, calling them \"friends of freedom\" and writing \"I approve of any development that makes it more difficult for governments and criminals to monopolize the use of force. As 3D printers become less expensive and more ubiquitous, this could be a major step in the right direction.\"", "title": "Political beliefs and activism" }, { "paragraph_id": 14, "text": "In 2015 Raymond accused the Ada Initiative and other women in tech groups of attempting to entrap male open source leaders and accuse them of rape, saying \"Try to avoid even being alone, ever, because there is a chance that a 'women in tech' advocacy group is going to try to collect your scalp.\"", "title": "Political beliefs and activism" }, { "paragraph_id": 15, "text": "Raymond has claimed that \"Gays experimented with unfettered promiscuity in the 1970s and got AIDS as a consequence\", and that \"Police who react to a random black male behaving suspiciously who might be in the critical age range as though he is an near-imminent lethal threat, are being rational, not racist.\" A progressive campaign, \"The Great Slate\", was successful in raising funds for candidates in part by asking for contributions from tech workers in return for not posting similar quotes by Raymond. 
Matasano Security employee and Great Slate fundraiser Thomas Ptacek said, \"I've been torturing Twitter with lurid Eric S. Raymond quotes for years. Every time I do, 20 people beg me to stop.\" It is estimated that, as of March 2018, over $30,000 has been raised in this way.", "title": "Political beliefs and activism" }, { "paragraph_id": 16, "text": "Raymond describes himself as neo-pagan.", "title": "Religious beliefs" } ]
Eric Steven Raymond, often referred to as ESR, is an American software developer, open-source software advocate, and author of the 1997 essay and 1999 book The Cathedral and the Bazaar. He wrote a guidebook for the Roguelike game NetHack. In the 1990s, he edited and updated the Jargon File, published as The New Hacker's Dictionary.
2001-10-02T19:05:45Z
2023-11-21T12:00:49Z
[ "Template:Cite book", "Template:Short description", "Template:Third-party", "Template:Use mdy dates", "Template:Reflist", "Template:Wikiquote", "Template:Official website", "Template:Internet Archive author", "Template:IMDb name", "Template:Rp", "Template:ISBN", "Template:Cite news", "Template:Webarchive", "Template:Authority control", "Template:Infobox person", "Template:Commons", "Template:Linux people", "Template:Portal bar", "Template:Redirect", "Template:Cite web", "Template:Gutenberg author" ]
https://en.wikipedia.org/wiki/Eric_S._Raymond
9,471
Externalization (psychology)
Externalization is a term used in psychoanalytic theory which describes the tendency to project one's internal states onto the outside world. It is generally regarded as an unconscious defense mechanism, thus the person is unaware they are doing it. Externalization takes on a different meaning in narrative therapy, where the client is encouraged to externalize a problem in order to gain a new perspective on it. In Freudian psychology, externalization (or externalisation) is a defense mechanism by which an individual projects their own internal characteristics onto the outside world, particularly onto other people. For example, a patient who is overly argumentative might instead perceive others as argumentative and themselves as blameless. Like other defense mechanisms, externalization can be a protection against anxiety and is, therefore, part of a healthy, normally functioning mind. However, if taken to excess, it can lead to the development of a neurosis. Michael White states that the problem of the client is externalized, to alter the client's point of view. Problems with self-regulation, including impulsivity, violence, sensation-seeking, and rule-breaking, are indicative of an externalizing risk pathway. A discrepancy exists between bottom-up reward-related circuitry, such as the ventral striatum, and top-down inhibitory control circuitry, which is located in the prefrontal cortex, linking externalizing behaviors. Externalization is often related to substance use disorders. In particular, alcohol use disorder is one of disorders that much externalization research has been dedicated to. Often, issues within the externalizing risk pathway, namely vulnerabilities in self-regulation, may impact the development of alcohol use disorder differently across stages of the addiction cycle. Likewise, marijuana use has been linked to an externalizing pathway that highlights aggressive and delinquent behavior. Another type of disorder that is linked to the externalizing pathway is Antisocial Personality Disorder due to its tendency to relate by lack of constraint. Much research has examined the similarities of antisocial personality disorder and substance use disorder in relation to externalizing behaviors.
[ { "paragraph_id": 0, "text": "Externalization is a term used in psychoanalytic theory which describes the tendency to project one's internal states onto the outside world. It is generally regarded as an unconscious defense mechanism, thus the person is unaware they are doing it. Externalization takes on a different meaning in narrative therapy, where the client is encouraged to externalize a problem in order to gain a new perspective on it.", "title": "" }, { "paragraph_id": 1, "text": "In Freudian psychology, externalization (or externalisation) is a defense mechanism by which an individual projects their own internal characteristics onto the outside world, particularly onto other people. For example, a patient who is overly argumentative might instead perceive others as argumentative and themselves as blameless.", "title": "Psychoanalysis" }, { "paragraph_id": 2, "text": "Like other defense mechanisms, externalization can be a protection against anxiety and is, therefore, part of a healthy, normally functioning mind. However, if taken to excess, it can lead to the development of a neurosis.", "title": "Psychoanalysis" }, { "paragraph_id": 3, "text": "Michael White states that the problem of the client is externalized, to alter the client's point of view.", "title": "Narrative therapy" }, { "paragraph_id": 4, "text": "Problems with self-regulation, including impulsivity, violence, sensation-seeking, and rule-breaking, are indicative of an externalizing risk pathway. A discrepancy exists between bottom-up reward-related circuitry, such as the ventral striatum, and top-down inhibitory control circuitry, which is located in the prefrontal cortex, linking externalizing behaviors. Externalization is often related to substance use disorders. In particular, alcohol use disorder is one of disorders that much externalization research has been dedicated to. Often, issues within the externalizing risk pathway, namely vulnerabilities in self-regulation, may impact the development of alcohol use disorder differently across stages of the addiction cycle. Likewise, marijuana use has been linked to an externalizing pathway that highlights aggressive and delinquent behavior. Another type of disorder that is linked to the externalizing pathway is Antisocial Personality Disorder due to its tendency to relate by lack of constraint. Much research has examined the similarities of antisocial personality disorder and substance use disorder in relation to externalizing behaviors.", "title": "Neuroscience of externalization" }, { "paragraph_id": 5, "text": "", "title": "References" } ]
Externalization is a term used in psychoanalytic theory which describes the tendency to project one's internal states onto the outside world. It is generally regarded as an unconscious defense mechanism, thus the person is unaware they are doing it. Externalization takes on a different meaning in narrative therapy, where the client is encouraged to externalize a problem in order to gain a new perspective on it.
2001-05-16T19:44:44Z
2023-11-28T04:26:45Z
[ "Template:Psych-stub", "Template:Short description", "Template:Main", "Template:Broader", "Template:Reflist", "Template:Cite book", "Template:Cite journal", "Template:Citation" ]
https://en.wikipedia.org/wiki/Externalization_(psychology)
9,472
Euro
The euro (symbol: €; currency code: EUR) is the official currency of 20 of the 27 member states of the European Union. This group of states is officially known as the euro area or, commonly, the eurozone, and includes about 344 million citizens as of 2023. The euro is divided into 100 euro cents. The currency is also used officially by the institutions of the European Union, by four European microstates that are not EU members, the British Overseas Territory of Akrotiri and Dhekelia, as well as unilaterally by Montenegro and Kosovo. Outside Europe, a number of special territories of EU members also use the euro as their currency. Additionally, over 200 million people worldwide use currencies pegged to the euro. The euro is the second-largest reserve currency as well as the second-most traded currency in the world after the United States dollar. As of December 2019, with more than €1.3 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world. The name euro was officially adopted on 16 December 1995 in Madrid. The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1 (US$1.1743 at the time). Physical euro coins and banknotes entered into circulation on 1 January 2002, making it the day-to-day operating currency of its original members, and by March 2002 it had completely replaced the former currencies. Between December 1999 and December 2002, the euro traded below the US dollar, but has since traded near parity with or above the US dollar, peaking at US$1.60 on 18 July 2008 and since then returning near to its original issue rate. On 13 July 2022, the two currencies hit parity for the first time in nearly two decades due in part to the 2022 Russian invasion of Ukraine. The euro is managed and administered by the European Central Bank (ECB, Frankfurt am Main) and the Eurosystem, composed of the central banks of the eurozone countries. As an independent central bank, the ECB has sole authority to set monetary policy. The Eurosystem participates in the printing, minting and distribution of notes and coins in all member states, and the operation of the eurozone payment systems. The 1992 Maastricht Treaty obliges most EU member states to adopt the euro upon meeting certain monetary and budgetary convergence criteria, although not all participating states have done so. Denmark has negotiated exemptions, while Sweden (which joined the EU in 1995, after the Maastricht Treaty was signed) turned down the euro in a non-binding referendum in 2003, and has circumvented the obligation to adopt the euro by not meeting the monetary and budgetary requirements. All nations that have joined the EU since 1993 have pledged to adopt the euro in due course. The Maastricht Treaty was later amended by the Treaty of Nice, which closed the gaps and loopholes in the Maastricht and Rome Treaties. The 20 participating members are Austria, Belgium, Croatia, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. The EU member states not in the eurozone are Bulgaria, Czech Republic, Denmark, Hungary, Poland, Romania, and Sweden. The government of Bulgaria aims to replace the Bulgarian lev by the euro on 1 January 2025. The government of Romania aims for the Romanian leu to be replaced by the euro on 1 January 2026. EU members Czech Republic, Hungary, Poland, and Sweden are legally obligated to adopt the euro eventually, though they have no required date for adoption, and their governments do not currently have any plans for switching.
Denmark negotiated for the right to retain its currency. Beyond the member states, the euro is used under monetary agreements by the four European microstates (Andorra, Monaco, San Marino and the Vatican City), in a number of EU special territories, and in the British Overseas Territory of Akrotiri and Dhekelia, and it has been adopted unilaterally by Montenegro and Kosovo. The currency of a number of other states is pegged to the euro. The euro is divided into 100 cents (also referred to as euro cents, especially when distinguishing them from other currencies, and referred to as such on the common side of all cent coins). In Community legislative acts the plural forms of euro and cent are spelled without the s, notwithstanding normal English usage. Otherwise, normal English plurals are used, with many local variations such as centime in France. All circulating coins have a common side showing the denomination or value, and a map in the background. Due to the linguistic plurality in the European Union, the Latin alphabet version of euro is used (as opposed to the less common Greek or Cyrillic) and Arabic numerals (other text is used on national sides in national languages, but other text on the common side is avoided). For the denominations except the 1-, 2- and 5-cent coins, the map only showed the 15 member states which were members when the euro was introduced. Beginning in 2007 or 2008 (depending on the country), the old map was replaced by a map of Europe also showing countries outside the EU. The 1-, 2- and 5-cent coins, however, keep their old design, showing a geographical map of Europe with the 15 member states of 2002 raised somewhat above the rest of the map. All common sides were designed by Luc Luycx. The coins also have a national side showing an image specifically chosen by the country that issued the coin. Euro coins from any member state may be freely used in any nation that has adopted the euro. The coins are issued in denominations of €2, €1, 50c, 20c, 10c, 5c, 2c, and 1c. To avoid the use of the two smallest coins, some cash transactions are rounded to the nearest five cents in the Netherlands and Ireland (by voluntary agreement) and in Finland and Italy (by law). This practice is discouraged by the commission, as is the practice of certain shops of refusing to accept high-value euro notes. Commemorative coins with €2 face value have been issued with changes to the design of the national side of the coin. These include both commonly issued coins, such as the €2 commemorative coin for the fiftieth anniversary of the signing of the Treaty of Rome, and nationally issued coins, such as the coin to commemorate the 2004 Summer Olympics issued by Greece. These coins are legal tender throughout the eurozone. Collector coins with various other denominations have been issued as well, but these are not intended for general circulation, and they are legal tender only in the member state that issued them. A number of institutions are authorised to mint euro coins. The design for the euro banknotes has common designs on both sides. The design was created by the Austrian designer Robert Kalina. Notes are issued in €500, €200, €100, €50, €20, €10, and €5. Each banknote has its own colour and is dedicated to an artistic period of European architecture. The front of the note features windows or gateways while the back has bridges, symbolising links between states in the union and with the future.
While the designs are supposed to be devoid of any identifiable characteristics, the initial designs by Robert Kalina were of specific bridges, including the Rialto and the Pont de Neuilly, and were subsequently rendered more generic; the final designs still bear very close similarities to their specific prototypes; thus they are not truly generic. The monuments looked similar enough to different national monuments to please everyone. The Europa series, or second series, consists of six denominations and no longer includes the €500 with issuance discontinued as of 27 April 2019. However, both the first and the second series of euro banknotes, including the €500, remain legal tender throughout the euro area. In December 2021, the ECB announced its plans to redesign euro banknotes by 2024. A theme advisory group, made up of one member from each euro area country, was selected to submit theme proposals to the ECB. The proposals will be voted on by the public; a design competition will also be held. Since 1 January 2002, the national central banks (NCBs) and the ECB have issued euro banknotes on a joint basis. Eurosystem NCBs are required to accept euro banknotes put into circulation by other Eurosystem members and these banknotes are not repatriated. The ECB issues 8% of the total value of banknotes issued by the Eurosystem. In practice, the ECB's banknotes are put into circulation by the NCBs, thereby incurring matching liabilities vis-à-vis the ECB. These liabilities carry interest at the main refinancing rate of the ECB. The other 92% of euro banknotes are issued by the NCBs in proportion to their respective shares of the ECB capital key, calculated using national share of European Union (EU) population and national share of EU GDP, equally weighted. Member states are authorised to print or to commission bank note printing. As of November 2022, these are the printers: Capital within the EU may be transferred in any amount from one state to another. All intra-Union transfers in euro are treated as domestic transactions and bear the corresponding domestic transfer costs. This includes all member states of the EU, even those outside the eurozone providing the transactions are carried out in euro. Credit/debit card charging and ATM withdrawals within the eurozone are also treated as domestic transactions; however paper-based payment orders, like cheques, have not been standardised so these are still domestic-based. The ECB has also set up a clearing system, TARGET, for large euro transactions. The euro was established by the provisions in the 1992 Maastricht Treaty. To participate in the currency, member states are meant to meet strict criteria, such as a budget deficit of less than 3% of their GDP, a debt ratio of less than 60% of GDP (both of which were ultimately widely flouted after introduction), low inflation, and interest rates close to the EU average. In the Maastricht Treaty, the United Kingdom and Denmark were granted exemptions per their request from moving to the stage of monetary union which resulted in the introduction of the euro. The name "euro" was officially adopted in Madrid on 16 December 1995. Belgian Esperantist Germain Pirlot, a former teacher of French and history, is credited with naming the new currency by sending a letter to then President of the European Commission, Jacques Santer, suggesting the name "euro" on 4 August 1995. 
Due to differences in national conventions for rounding and significant digits, all conversion between the national currencies had to be carried out using the process of triangulation via the euro. The definitive values of one euro in terms of the exchange rates at which the currency entered the euro are shown in the table. The rates were determined by the Council of the European Union, based on a recommendation from the European Commission based on the market rates on 31 December 1998. They were set so that one European Currency Unit (ECU) would equal one euro. The European Currency Unit was an accounting unit used by the EU, based on the currencies of the member states; it was not a currency in its own right. They could not be set earlier, because the ECU depended on the closing exchange rate of the non-euro currencies (principally pound sterling) that day. The procedure used to fix the conversion rate between the Greek drachma and the euro was different since the euro by then was already two years old. While the conversion rates for the initial eleven currencies were determined only hours before the euro was introduced, the conversion rate for the Greek drachma was fixed several months beforehand. The currency was introduced in non-physical form (traveller's cheques, electronic transfers, banking, etc.) at midnight on 1 January 1999, when the national currencies of participating countries (the eurozone) ceased to exist independently. Their exchange rates were locked at fixed rates against each other. The euro thus became the successor to the European Currency Unit (ECU). The notes and coins for the old currencies, however, continued to be used as legal tender until new euro notes and coins were introduced on 1 January 2002. The changeover period during which the former currencies' notes and coins were exchanged for those of the euro lasted about two months, until 28 February 2002. The official date on which the national currencies ceased to be legal tender varied from member state to member state. The earliest date was in Germany, where the mark officially ceased to be legal tender on 31 December 2001, though the exchange period lasted for two months more. Even after the old currencies ceased to be legal tender, they continued to be accepted by national central banks for periods ranging from several years to indefinitely (the latter for Austria, Germany, Ireland, Estonia and Latvia in banknotes and coins, and for Belgium, Luxembourg, Slovenia and Slovakia in banknotes only). The earliest coins to become non-convertible were the Portuguese escudos, which ceased to have monetary value after 31 December 2002, although banknotes remained exchangeable until 2022. A special euro currency sign (€) was designed after a public survey had narrowed ten of the original thirty proposals down to two. The President of the European Commission at the time (Jacques Santer) and the European Commissioner with responsibility for the euro (Yves-Thibault de Silguy) then chose the winning design. Regarding the symbol, the European Commission stated on behalf of the European Union: The symbol € is based on the Greek letter epsilon (Є), with the first letter in the word "Europe" and with 2 parallel lines signifying stability. The European Commission also specified a euro logo with exact proportions. Placement of the currency sign relative to the numeric amount varies from state to state, but for texts in English published by EU institutions, the symbol (or the ISO-standard "EUR") should precede the amount. 
Following the U.S. financial crisis in 2008, fears of a sovereign debt crisis developed in 2009 among investors concerning some European states, with the situation becoming particularly tense in early 2010. Greece was most acutely affected, but fellow Eurozone members Cyprus, Ireland, Italy, Portugal, and Spain were also significantly affected. All these countries used EU funds except Italy, which is a major donor to the EFSF. To be included in the eurozone, countries had to fulfil certain convergence criteria, but the meaningfulness of such criteria was diminished by the fact that they were not enforced with the same level of strictness among countries.

According to the Economist Intelligence Unit in 2011, "[I]f the [euro area] is treated as a single entity, its [economic and fiscal] position looks no worse and in some respects, rather better than that of the US or the UK"; the budget deficit for the euro area as a whole is much lower, and the euro area's government debt/GDP ratio of 86% in 2010 was about the same level as that of the United States. "Moreover", they write, "private-sector indebtedness across the euro area as a whole is markedly lower than in the highly leveraged Anglo-Saxon economies". The authors conclude that the crisis "is as much political as economic" and the result of the fact that the euro area lacks the support of "institutional paraphernalia (and mutual bonds of solidarity) of a state".

The crisis continued with S&P downgrading the credit rating of nine euro-area countries, including France, then downgrading the entire European Financial Stability Facility (EFSF) fund.

A historical parallel – to 1931, when Germany was burdened with debt, unemployment and austerity while France and the United States were relatively strong creditors – gained attention in summer 2012, even as Germany received a debt-rating warning of its own.

The euro is the sole currency of 20 EU member states: Austria, Belgium, Croatia, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. These countries constitute the "eurozone", some 347 million people in total as of 2023. According to bilateral agreements with the EU, the euro has also been designated as the sole and official currency in a further four European microstates awarded minting rights (Andorra, Monaco, San Marino and the Vatican City). With all but one (Denmark) of the remaining EU members obliged to join when economic conditions permit, together with future members of the EU, the enlargement of the eurozone is set to continue.

The euro is also the sole currency in three overseas territories of France that are not themselves part of the EU, namely Saint Barthélemy, Saint Pierre and Miquelon, and the French Southern and Antarctic Lands, as well as in the British Overseas Territory of Akrotiri and Dhekelia.

The euro has been adopted unilaterally as the sole currency of Montenegro and Kosovo. It has also been used as a foreign trading currency in Cuba since 1998, Syria since 2006, and Venezuela since 2018. In 2009, Zimbabwe abandoned its local currency and introduced major global convertible currencies instead, including the euro and the United States dollar. The direct usage of the euro outside of the official framework of the EU affects nearly 3 million people.

Outside the eurozone, two EU member states have currencies that are pegged to the euro, which is a precondition to joining the eurozone.
The Danish krone and Bulgarian lev are pegged due to their participation in the ERM II.

Additionally, a total of 21 countries and territories that do not belong to the EU have currencies that are directly pegged to the euro, including 14 countries in mainland Africa (CFA franc), two African island countries (Comorian franc and Cape Verdean escudo), three French Pacific territories (CFP franc) and two Balkan countries, Bosnia and Herzegovina (Bosnia and Herzegovina convertible mark) and North Macedonia (Macedonian denar). On 1 January 2010, the dobra of São Tomé and Príncipe was officially linked with the euro. Additionally, the Moroccan dirham is tied to a basket of currencies, including the euro and the US dollar, with the euro given the highest weighting.

These countries generally had previously implemented a currency peg to one of the major European currencies (e.g. the French franc, Deutsche Mark or Portuguese escudo), and when these currencies were replaced by the euro their currencies became pegged to the euro. Pegging a country's currency to a major currency is regarded as a safety measure, especially for currencies of areas with weak economies, as the euro is seen as a stable currency, prevents runaway inflation, and encourages foreign investment due to its stability.

In total, as of 2013, 182 million people in Africa use a currency pegged to the euro, 27 million people outside the eurozone in Europe, and another 545,000 people on Pacific islands.

Since 2005, stamps issued by the Sovereign Military Order of Malta have been denominated in euros, although the Order's official currency remains the Maltese scudo. The Maltese scudo itself is pegged to the euro and is only recognised as legal tender within the Order.

Since its introduction in 1999, the euro has been the second most widely held international reserve currency after the U.S. dollar. The share of the euro as a reserve currency increased from 18% in 1999 to 27% in 2008. Over this period, the share held in U.S. dollars fell from 71% to 64%, and that held in Japanese yen fell from 6.4% to 3.3%. The euro inherited and built on the status of the Deutsche Mark as the second most important reserve currency. The euro remains underweight as a reserve currency in advanced economies while overweight in emerging and developing economies: according to the International Monetary Fund, the total of euro held as a reserve in the world at the end of 2008 was equal to $1.1 trillion or €850 billion, with a share of 22% of all currency reserves in advanced economies, but a total of 31% of all currency reserves in emerging and developing economies.

The possibility of the euro becoming the first international reserve currency has been debated among economists. Former Federal Reserve Chairman Alan Greenspan gave his opinion in September 2007 that it was "absolutely conceivable that the euro will replace the US dollar as reserve currency, or will be traded as an equally important reserve currency". In contrast to Greenspan's 2007 assessment, the euro's increase in the share of the worldwide currency reserve basket has slowed considerably since 2007, since the beginning of the credit-crunch-related worldwide recession and the European sovereign-debt crisis.

In economics, an optimum currency area, or region (OCA or OCR), is a geographical region in which it would maximise economic efficiency to have the entire region share a single currency.
There are two models, both proposed by Robert Mundell: the stationary expectations model and the international risk sharing model. Mundell himself advocates the international risk sharing model and thus concludes in favour of the euro. However, even before the creation of the single currency, there were concerns over diverging economies. Before the late-2000s recession, it was considered unlikely that a state would leave the euro or that the whole zone would collapse. However, the Greek government-debt crisis led to former British Foreign Secretary Jack Straw claiming the eurozone could not last in its current form. Part of the problem seems to be the rules that were created when the euro was set up. John Lanchester, writing for The New Yorker, explains it:

The guiding principle of the currency, which opened for business in 1999, were supposed to be a set of rules to limit a country's annual deficit to three per cent of gross domestic product, and the total accumulated debt to sixty per cent of G.D.P. It was a nice idea, but by 2004 the two biggest economies in the euro zone, Germany and France, had broken the rules for three years in a row.

The most obvious benefit of adopting a single currency is to remove the cost of exchanging currency, theoretically allowing businesses and individuals to consummate previously unprofitable trades. For consumers, banks in the eurozone must charge the same for intra-member cross-border transactions as for purely domestic transactions for electronic payments (e.g., credit cards, debit cards and cash machine withdrawals).

Financial markets on the continent are expected to be far more liquid and flexible than they were in the past. The reduction in cross-border transaction costs will allow larger banking firms to provide a wider array of banking services that can compete across and beyond the eurozone. However, although transaction costs were reduced, some studies have shown that risk aversion has increased during the last 40 years in the Eurozone.

Another effect of the common European currency is that differences in prices—in particular in price levels—should decrease because of the law of one price. Differences in prices can trigger arbitrage, i.e., speculative trade in a commodity across borders purely to exploit the price differential. Therefore, prices on commonly traded goods are likely to converge, causing inflation in some regions and deflation in others during the transition. Some evidence of this has been observed in specific eurozone markets.

Before the introduction of the euro, some countries had successfully contained inflation, which was then seen as a major economic problem, by establishing largely independent central banks. One such bank was the Bundesbank in Germany; the European Central Bank was modelled on the Bundesbank.

The euro has come under criticism due to its regulation, lack of flexibility, and rigidity towards the member states sharing it on issues such as nominal interest rates. Many national and corporate bonds denominated in euro are significantly more liquid and have lower interest rates than was historically the case when denominated in national currencies. While increased liquidity may lower the nominal interest rate on the bond, denominating the bond in a currency with low levels of inflation arguably plays a much larger role.
A credible commitment to low levels of inflation and a stable debt reduces the risk that the value of the debt will be eroded by higher levels of inflation or default in the future, allowing debt to be issued at a lower nominal interest rate.

There is also a cost in structurally keeping inflation lower than in the United States, United Kingdom, and China. The result is that, seen from those countries, the euro has become expensive, making European products increasingly expensive for its largest importers; hence export from the eurozone becomes more difficult.

In general, those in Europe who own large amounts of euro are served by high stability and low inflation.

A monetary union means states in that union lose the main mechanism for recovering their international competitiveness, namely weakening (depreciating) their currency. When wages become too high compared to productivity in the exports sector, these exports become more expensive and are crowded out of the market, both within a country and abroad. This drives a fall in employment and output in the exports sector and a deterioration of the trade and current account balances. A fall in output and employment in the tradable goods sector may be offset by growth in non-export sectors, especially construction and services. Increased purchases abroad and negative current account balances can be financed without a problem as long as credit is cheap. The need to finance a trade deficit weakens a currency, automatically making exports more attractive at home and abroad. A state in a monetary union cannot use a weakening currency to recover its international competitiveness; to achieve this, it has to reduce prices, including wages (deflation). This could result in high unemployment and lower incomes, as it did during the European sovereign-debt crisis.

The euro increased price transparency and stimulated cross-border trade. A 2009 consensus from the studies of the introduction of the euro concluded that it has increased trade within the eurozone by 5% to 10%, although one study suggested an increase of only 3% while another estimated 9 to 14%. However, a meta-analysis of all available studies suggests that the prevalence of positive estimates is caused by publication bias and that the underlying effect may be negligible. A more recent meta-analysis, though, shows that publication bias decreases over time and that there are positive trade effects from the introduction of the euro, as long as results from before 2010 are taken into account. This may be because of the inclusion of the Financial crisis of 2007–2008 and ongoing integration within the EU. Furthermore, older studies based on certain methods of analysis of main trends, reflecting general cohesion policies in Europe that started before and continued after the introduction of the common currency, find no effect on trade. These results suggest that other policies aimed at European integration might be the source of the observed increase in trade. According to Barry Eichengreen, studies disagree on the magnitude of the effect of the euro on trade, but they agree that it did have an effect.

Physical investment seems to have increased by 5% in the eurozone due to the introduction. Regarding foreign direct investment, a study found that the intra-eurozone FDI stocks have increased by about 20% during the first four years of the EMU.
Concerning the effect on corporate investment, there is evidence that the introduction of the euro has resulted in an increase in investment rates and that it has made it easier for firms to access financing in Europe. The euro has most specifically stimulated investment in companies that come from countries that previously had weak currencies. A study found that the introduction of the euro accounts for 22% of the investment rate after 1998 in countries that previously had a weak currency.

The introduction of the euro has led to extensive discussion about its possible effect on inflation. In the short term, there was a widespread impression in the population of the eurozone that the introduction of the euro had led to an increase in prices, but this impression was not confirmed by general indices of inflation and other studies. A study of this paradox found that this was due to an asymmetric effect of the introduction of the euro on prices: while it had no effect on most goods, it had an effect on cheap goods which have seen their price round up after the introduction of the euro. The study found that consumers based their beliefs on inflation of those cheap goods which are frequently purchased. It has also been suggested that the jump in small prices may be because prior to the introduction, retailers made fewer upward adjustments and waited for the introduction of the euro to do so.

One of the advantages of the adoption of a common currency is the reduction of the risk associated with changes in currency exchange rates. It has been found that the introduction of the euro created "significant reductions in market risk exposures for nonfinancial firms both in and outside Europe". These reductions in market risk "were concentrated in firms domiciled in the eurozone and in non-euro firms with a high fraction of foreign sales or assets in Europe".

The introduction of the euro increased European financial integration, which helped stimulate growth of a European securities market (bond markets are characterized by economies of scale dynamics). According to a study on this question, it has "significantly reshaped the European financial system, especially with respect to the securities markets [...] However, the real and policy barriers to integration in the retail and corporate banking sectors remain significant, even if the wholesale end of banking has been largely integrated." Specifically, the euro has significantly decreased the cost of trade in bonds, equity, and banking assets within the eurozone. On a global level, there is evidence that the introduction of the euro has led to an integration in terms of investment in bond portfolios, with eurozone countries lending and borrowing more between each other than with other countries. Financial integration made it cheaper for European companies to borrow. Banks, firms and households could also invest more easily outside of their own country, thus creating greater international risk-sharing.

As of January 2014, and since the introduction of the euro, interest rates of most member countries (particularly those with a weak currency) have decreased. Some of these countries had the most serious sovereign financing problems.
The effect of declining interest rates, combined with excess liquidity continually provided by the ECB, made it easier for banks within the countries in which interest rates fell the most, and their linked sovereigns, to borrow significant amounts (above the 3% of GDP budget deficit imposed on the eurozone initially) and significantly inflate their public and private debt levels. Following the financial crisis of 2007–2008, governments in these countries found it necessary to bail out or nationalise their privately held banks to prevent systemic failure of the banking system when underlying hard or financial asset values were found to be grossly inflated and sometimes so nearly worthless that there was no liquid market for them. This further increased the already high levels of public debt to a level the markets began to consider unsustainable, via increasing government bond interest rates, producing the ongoing European sovereign-debt crisis.

The evidence on the convergence of prices in the eurozone with the introduction of the euro is mixed. Several studies failed to find any evidence of convergence following the introduction of the euro, after a phase of convergence in the early 1990s. Other studies have found evidence of price convergence, in particular for cars. A possible reason for the divergence between the different studies is that the processes of convergence may not have been linear, slowing down substantially between 2000 and 2003 and resurfacing after 2003, as suggested by a recent study (2009).

A study suggests that the introduction of the euro has had a positive effect on the amount of tourist travel within the EMU, with an increase of 6.5%.

The ECB targets interest rates rather than exchange rates and, in general, does not intervene on the foreign exchange rate markets. This is because of the implications of the Mundell–Fleming model, which implies that a central bank cannot (without capital controls) maintain interest rate and exchange rate targets simultaneously, because increasing the money supply results in a depreciation of the currency. In the years following the Single European Act, the EU has liberalised its capital markets and, as the ECB has inflation targeting as its monetary policy, the exchange-rate regime of the euro is floating.

The euro is the second-most widely held reserve currency after the U.S. dollar. After its introduction on 4 January 1999, its exchange rate against the other major currencies fell, reaching its lowest exchange rates in 2000 (3 May vs sterling, 25 October vs the U.S. dollar, 26 October vs Japanese yen). Afterwards it recovered, and its exchange rate reached its historical highest point in 2008 (15 July vs US dollar, 23 July vs Japanese yen, 29 December vs sterling). With the advent of the global financial crisis the euro initially fell, only to recover later. Despite pressure due to the European sovereign-debt crisis, the euro remained stable. In November 2011 the euro's exchange rate index – measured against currencies of the bloc's major trading partners – was trading almost two percent higher on the year, approximately at the same level as it was before the crisis kicked off in 2007. In mid-July 2022, the euro equalled the US dollar for a short period of time.

Besides the economic motivations for the introduction of the euro, its creation was also partly justified as a way to foster a closer sense of joint identity between European citizens.
Statements about this goal were for instance made by Wim Duisenberg, European Central Bank Governor, in 1998, Laurent Fabius, French Finance Minister, in 2000, and Romano Prodi, President of the European Commission, in 2002. However, 15 years after the introduction of the euro, a study found no evidence that it has had any effect on a shared sense of European identity.

The formal titles of the currency are euro for the major unit and cent for the minor (one-hundredth) unit, for official use in most eurozone languages; according to the ECB, all languages should use the same spelling for the nominative singular. This may contradict normal rules for word formation in some languages.

Bulgaria has negotiated an exception; euro in the Bulgarian Cyrillic alphabet is spelled eвро (evro) and not eуро (euro) in all official documents. In the Greek script the term ευρώ (evró) is used; the Greek "cent" coins are denominated in λεπτό/ά (leptó/á). Official practice for English-language EU legislation is to use the words euro and cent as both singular and plural, although the European Commission's Directorate-General for Translation states that the plural forms euros and cents should be used in English. The word 'euro' is pronounced differently in each language, according to that language's own pronunciation rules: in German [ˈɔʏʁo], in English /ˈjʊəroʊ/, in French [øʁo], etc.

For local phonetics, cent, use of plural and amount formatting (€6,00 or 6.00 €), see Language and the euro.
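The placement and separator conventions mentioned above (the symbol preceding the amount in English EU texts, "€6,00" versus "6.00 €" elsewhere) are locale questions that standard internationalisation libraries already encode. The sketch below uses the third-party Babel library, assumed to be installed, purely to illustrate how the same amount is rendered under a few eurozone locales.

```python
# Locale-dependent rendering of a euro amount: symbol placement and the
# decimal separator differ between, e.g., Irish English and German or French
# conventions. Requires the third-party Babel package (pip install Babel).
from babel.numbers import format_currency

for loc in ("en_IE", "de_DE", "fr_FR"):
    print(loc, format_currency(6, "EUR", locale=loc))
    # e.g. "en_IE €6.00", "de_DE 6,00 €", "fr_FR 6,00 €"
```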
[ { "paragraph_id": 0, "text": "The euro (symbol: €; currency code: EUR) is the official currency of 20 of the 27 member states of the European Union. This group of states is officially known as the euro area or, commonly, the eurozone, and includes about 344 million citizens as of 2023. The euro is divided into 100 euro cents.", "title": "" }, { "paragraph_id": 1, "text": "The currency is also used officially by the institutions of the European Union, by four European microstates that are not EU members, the British Overseas Territory of Akrotiri and Dhekelia, as well as unilaterally by Montenegro and Kosovo. Outside Europe, a number of special territories of EU members also use the euro as their currency. Additionally, over 200 million people worldwide use currencies pegged to the euro.", "title": "" }, { "paragraph_id": 2, "text": "The euro is the second-largest reserve currency as well as the second-most traded currency in the world after the United States dollar. As of December 2019, with more than €1.3 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world.", "title": "" }, { "paragraph_id": 3, "text": "The name euro was officially adopted on 16 December 1995 in Madrid. The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1 (US$1.1743 at the time). Physical euro coins and banknotes entered into circulation on 1 January 2002, making it the day-to-day operating currency of its original members, and by March 2002 it had completely replaced the former currencies.", "title": "" }, { "paragraph_id": 4, "text": "Between December 1999 and December 2002, the euro traded below the US dollar, but has since traded near parity with or above the US dollar, peaking at US$1.60 on 18 July 2008 and since then returning near to its original issue rate. On 13 July 2022, the two currencies hit parity for the first time in nearly two decades due in part to the 2022 Russian invasion of Ukraine.", "title": "" }, { "paragraph_id": 5, "text": "The euro is managed and administered by the European Central Bank (ECB, Frankfurt am Main) and the Eurosystem, composed of the central banks of the eurozone countries. As an independent central bank, the ECB has sole authority to set monetary policy. The Eurosystem participates in the printing, minting and distribution of notes and coins in all member states, and the operation of the eurozone payment systems.", "title": "Characteristics" }, { "paragraph_id": 6, "text": "The 1992 Maastricht Treaty obliges most EU member states to adopt the euro upon meeting certain monetary and budgetary convergence criteria, although not all participating states have done so. Denmark has negotiated exemptions, while Sweden (which joined the EU in 1995, after the Maastricht Treaty was signed) turned down the euro in a non-binding referendum in 2003, and has circumvented the obligation to adopt the euro by not meeting the monetary and budgetary requirements. All nations that have joined the EU since 1993 have pledged to adopt the euro in due course. 
The Maastricht Treaty was later amended by the Treaty of Nice, which closed the gaps and loopholes in the Maastricht and Rome Treaties.", "title": "Characteristics" }, { "paragraph_id": 7, "text": "The 20 participating members are", "title": "Characteristics" }, { "paragraph_id": 8, "text": "The EU member states not in the Eurozone are Bulgaria, Czech Republic, Denmark, Hungary, Poland, Romania, and Sweden.", "title": "Characteristics" }, { "paragraph_id": 9, "text": "The government of Bulgaria aims to replace the Bulgarian lev by the euro on 1 January 2025.", "title": "Characteristics" }, { "paragraph_id": 10, "text": "The government of Romania aims for the Romanian leu to be replaced by the euro on 1 January 2026.", "title": "Characteristics" }, { "paragraph_id": 11, "text": "EU members Czech Republic, Hungary, Poland, and Sweden are legally obligated to adopt the euro eventually, though they have no required date for adoption, and their governments do not currently have any plans for switching. Denmark negotiated for the right to retain its currency.", "title": "Characteristics" }, { "paragraph_id": 12, "text": "Microstates with a monetary agreement:", "title": "Characteristics" }, { "paragraph_id": 13, "text": "EU special territories", "title": "Characteristics" }, { "paragraph_id": 14, "text": "British Overseas Territory", "title": "Characteristics" }, { "paragraph_id": 15, "text": "Unilateral adopters", "title": "Characteristics" }, { "paragraph_id": 16, "text": "The currency of a number of states is pegged to the euro. These states are:", "title": "Characteristics" }, { "paragraph_id": 17, "text": "The euro is divided into 100 cents (also referred to as euro cents, especially when distinguishing them from other currencies, and referred to as such on the common side of all cent coins). In Community legislative acts the plural forms of euro and cent are spelled without the s, notwithstanding normal English usage. Otherwise, normal English plurals are used, with many local variations such as centime in France.", "title": "Coins and banknotes" }, { "paragraph_id": 18, "text": "All circulating coins have a common side showing the denomination or value, and a map in the background. Due to the linguistic plurality in the European Union, the Latin alphabet version of euro is used (as opposed to the less common Greek or Cyrillic) and Arabic numerals (other text is used on national sides in national languages, but other text on the common side is avoided). For the denominations except the 1-, 2- and 5-cent coins, the map only showed the 15 member states which were members when the euro was introduced. Beginning in 2007 or 2008 (depending on the country), the old map was replaced by a map of Europe also showing countries outside the EU. The 1-, 2- and 5-cent coins, however, keep their old design, showing a geographical map of Europe with the 15 member states of 2002 raised somewhat above the rest of the map. All common sides were designed by Luc Luycx. The coins also have a national side showing an image specifically chosen by the country that issued the coin. Euro coins from any member state may be freely used in any nation that has adopted the euro.", "title": "Coins and banknotes" }, { "paragraph_id": 19, "text": "The coins are issued in denominations of €2, €1, 50c, 20c, 10c, 5c, 2c, and 1c. To avoid the use of the two smallest coins, some cash transactions are rounded to the nearest five cents in the Netherlands and Ireland (by voluntary agreement) and in Finland and Italy (by law). 
This practice is discouraged by the commission, as is the practice of certain shops of refusing to accept high-value euro notes.", "title": "Coins and banknotes" }, { "paragraph_id": 20, "text": "Commemorative coins with €2 face value have been issued with changes to the design of the national side of the coin. These include both commonly issued coins, such as the €2 commemorative coin for the fiftieth anniversary of the signing of the Treaty of Rome, and nationally issued coins, such as the coin to commemorate the 2004 Summer Olympics issued by Greece. These coins are legal tender throughout the eurozone. Collector coins with various other denominations have been issued as well, but these are not intended for general circulation, and they are legal tender only in the member state that issued them.", "title": "Coins and banknotes" }, { "paragraph_id": 21, "text": "A number of institutions are authorised to mint euro coins:", "title": "Coins and banknotes" }, { "paragraph_id": 22, "text": "The design for the euro banknotes has common designs on both sides. The design was created by the Austrian designer Robert Kalina. Notes are issued in €500, €200, €100, €50, €20, €10, and €5. Each banknote has its own colour and is dedicated to an artistic period of European architecture. The front of the note features windows or gateways while the back has bridges, symbolising links between states in the union and with the future. While the designs are supposed to be devoid of any identifiable characteristics, the initial designs by Robert Kalina were of specific bridges, including the Rialto and the Pont de Neuilly, and were subsequently rendered more generic; the final designs still bear very close similarities to their specific prototypes; thus they are not truly generic. The monuments looked similar enough to different national monuments to please everyone.", "title": "Coins and banknotes" }, { "paragraph_id": 23, "text": "The Europa series, or second series, consists of six denominations and no longer includes the €500 with issuance discontinued as of 27 April 2019. However, both the first and the second series of euro banknotes, including the €500, remain legal tender throughout the euro area.", "title": "Coins and banknotes" }, { "paragraph_id": 24, "text": "In December 2021, the ECB announced its plans to redesign euro banknotes by 2024. A theme advisory group, made up of one member from each euro area country, was selected to submit theme proposals to the ECB. The proposals will be voted on by the public; a design competition will also be held.", "title": "Coins and banknotes" }, { "paragraph_id": 25, "text": "Since 1 January 2002, the national central banks (NCBs) and the ECB have issued euro banknotes on a joint basis. Eurosystem NCBs are required to accept euro banknotes put into circulation by other Eurosystem members and these banknotes are not repatriated. The ECB issues 8% of the total value of banknotes issued by the Eurosystem. In practice, the ECB's banknotes are put into circulation by the NCBs, thereby incurring matching liabilities vis-à-vis the ECB. These liabilities carry interest at the main refinancing rate of the ECB. 
The other 92% of euro banknotes are issued by the NCBs in proportion to their respective shares of the ECB capital key, calculated using national share of European Union (EU) population and national share of EU GDP, equally weighted.", "title": "Coins and banknotes" }, { "paragraph_id": 26, "text": "Member states are authorised to print or to commission bank note printing. As of November 2022, these are the printers:", "title": "Coins and banknotes" }, { "paragraph_id": 27, "text": "Capital within the EU may be transferred in any amount from one state to another. All intra-Union transfers in euro are treated as domestic transactions and bear the corresponding domestic transfer costs. This includes all member states of the EU, even those outside the eurozone providing the transactions are carried out in euro. Credit/debit card charging and ATM withdrawals within the eurozone are also treated as domestic transactions; however paper-based payment orders, like cheques, have not been standardised so these are still domestic-based. The ECB has also set up a clearing system, TARGET, for large euro transactions.", "title": "Coins and banknotes" }, { "paragraph_id": 28, "text": "The euro was established by the provisions in the 1992 Maastricht Treaty. To participate in the currency, member states are meant to meet strict criteria, such as a budget deficit of less than 3% of their GDP, a debt ratio of less than 60% of GDP (both of which were ultimately widely flouted after introduction), low inflation, and interest rates close to the EU average. In the Maastricht Treaty, the United Kingdom and Denmark were granted exemptions per their request from moving to the stage of monetary union which resulted in the introduction of the euro.", "title": "History" }, { "paragraph_id": 29, "text": "The name \"euro\" was officially adopted in Madrid on 16 December 1995. Belgian Esperantist Germain Pirlot, a former teacher of French and history, is credited with naming the new currency by sending a letter to then President of the European Commission, Jacques Santer, suggesting the name \"euro\" on 4 August 1995.", "title": "History" }, { "paragraph_id": 30, "text": "Due to differences in national conventions for rounding and significant digits, all conversion between the national currencies had to be carried out using the process of triangulation via the euro. The definitive values of one euro in terms of the exchange rates at which the currency entered the euro are shown in the table.", "title": "History" }, { "paragraph_id": 31, "text": "The rates were determined by the Council of the European Union, based on a recommendation from the European Commission based on the market rates on 31 December 1998. They were set so that one European Currency Unit (ECU) would equal one euro. The European Currency Unit was an accounting unit used by the EU, based on the currencies of the member states; it was not a currency in its own right. They could not be set earlier, because the ECU depended on the closing exchange rate of the non-euro currencies (principally pound sterling) that day.", "title": "History" }, { "paragraph_id": 32, "text": "The procedure used to fix the conversion rate between the Greek drachma and the euro was different since the euro by then was already two years old. 
While the conversion rates for the initial eleven currencies were determined only hours before the euro was introduced, the conversion rate for the Greek drachma was fixed several months beforehand.", "title": "History" }, { "paragraph_id": 33, "text": "The currency was introduced in non-physical form (traveller's cheques, electronic transfers, banking, etc.) at midnight on 1 January 1999, when the national currencies of participating countries (the eurozone) ceased to exist independently. Their exchange rates were locked at fixed rates against each other. The euro thus became the successor to the European Currency Unit (ECU). The notes and coins for the old currencies, however, continued to be used as legal tender until new euro notes and coins were introduced on 1 January 2002.", "title": "History" }, { "paragraph_id": 34, "text": "The changeover period during which the former currencies' notes and coins were exchanged for those of the euro lasted about two months, until 28 February 2002. The official date on which the national currencies ceased to be legal tender varied from member state to member state. The earliest date was in Germany, where the mark officially ceased to be legal tender on 31 December 2001, though the exchange period lasted for two months more. Even after the old currencies ceased to be legal tender, they continued to be accepted by national central banks for periods ranging from several years to indefinitely (the latter for Austria, Germany, Ireland, Estonia and Latvia in banknotes and coins, and for Belgium, Luxembourg, Slovenia and Slovakia in banknotes only). The earliest coins to become non-convertible were the Portuguese escudos, which ceased to have monetary value after 31 December 2002, although banknotes remained exchangeable until 2022.", "title": "History" }, { "paragraph_id": 35, "text": "A special euro currency sign (€) was designed after a public survey had narrowed ten of the original thirty proposals down to two. The President of the European Commission at the time (Jacques Santer) and the European Commissioner with responsibility for the euro (Yves-Thibault de Silguy) then chose the winning design.", "title": "History" }, { "paragraph_id": 36, "text": "Regarding the symbol, the European Commission stated on behalf of the European Union:", "title": "History" }, { "paragraph_id": 37, "text": "The symbol € is based on the Greek letter epsilon (Є), with the first letter in the word \"Europe\" and with 2 parallel lines signifying stability.", "title": "History" }, { "paragraph_id": 38, "text": "The European Commission also specified a euro logo with exact proportions. Placement of the currency sign relative to the numeric amount varies from state to state, but for texts in English published by EU institutions, the symbol (or the ISO-standard \"EUR\") should precede the amount.", "title": "History" }, { "paragraph_id": 39, "text": "Following the U.S. financial crisis in 2008, fears of a sovereign debt crisis developed in 2009 among investors concerning some European states, with the situation becoming particularly tense in early 2010. Greece was most acutely affected, but fellow Eurozone members Cyprus, Ireland, Italy, Portugal, and Spain were also significantly affected. All these countries used EU funds except Italy, which is a major donor to the EFSF. 
To be included in the eurozone, countries had to fulfil certain convergence criteria, but the meaningfulness of such criteria was diminished by the fact it was not enforced with the same level of strictness among countries.", "title": "History" }, { "paragraph_id": 40, "text": "According to the Economist Intelligence Unit in 2011, \"[I]f the [euro area] is treated as a single entity, its [economic and fiscal] position looks no worse and in some respects, rather better than that of the US or the UK\" and the budget deficit for the euro area as a whole is much lower and the euro area's government debt/GDP ratio of 86% in 2010 was about the same level as that of the United States. \"Moreover\", they write, \"private-sector indebtedness across the euro area as a whole is markedly lower than in the highly leveraged Anglo-Saxon economies\". The authors conclude that the crisis \"is as much political as economic\" and the result of the fact that the euro area lacks the support of \"institutional paraphernalia (and mutual bonds of solidarity) of a state\".", "title": "History" }, { "paragraph_id": 41, "text": "The crisis continued with S&P downgrading the credit rating of nine euro-area countries, including France, then downgrading the entire European Financial Stability Facility (EFSF) fund.", "title": "History" }, { "paragraph_id": 42, "text": "A historical parallel – to 1931 when Germany was burdened with debt, unemployment and austerity while France and the United States were relatively strong creditors – gained attention in summer 2012 even as Germany received a debt-rating warning of its own.", "title": "History" }, { "paragraph_id": 43, "text": "The euro is the sole currency of 20 EU member states: Austria, Belgium, Croatia, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. These countries constitute the \"eurozone\", some 347 million people in total as of 2023. According to bilateral agreements with the EU, the euro has also been designated as the sole and official currency in a further four European microstates awarded minting rights (Andorra, Monaco, San Marino and the Vatican City). With all but one (Denmark) EU members obliged to join when economic conditions permit, together with future members of the EU, the enlargement of the eurozone is set to continue.", "title": "Direct and indirect usage" }, { "paragraph_id": 44, "text": "The euro is also the sole currency in three overseas territories of France that are not themselves part of the EU, namely Saint Barthélemy, Saint Pierre and Miquelon, and the French Southern and Antarctic Lands, as well as in the British Overseas Territory of Akrotiri and Dhekelia.", "title": "Direct and indirect usage" }, { "paragraph_id": 45, "text": "The euro has been adopted unilaterally as the sole currency of Montenegro and Kosovo. It has also been used as a foreign trading currency in Cuba since 1998, Syria since 2006, and Venezuela since 2018. In 2009, Zimbabwe abandoned its local currency and introduced major global convertible currencies instead, including the euro and the United States dollar. The direct usage of the euro outside of the official framework of the EU affects nearly 3 million people.", "title": "Direct and indirect usage" }, { "paragraph_id": 46, "text": "Outside the eurozone, two EU member states have currencies that are pegged to the euro, which is a precondition to joining the eurozone. 
The Danish krone and Bulgarian lev are pegged due to their participation in the ERM II.", "title": "Direct and indirect usage" }, { "paragraph_id": 47, "text": "Additionally, a total of 21 countries and territories that do not belong to the EU have currencies that are directly pegged to the euro including 14 countries in mainland Africa (CFA franc), two African island countries (Comorian franc and Cape Verdean escudo), three French Pacific territories (CFP franc) and two Balkan countries, Bosnia and Herzegovina (Bosnia and Herzegovina convertible mark) and North Macedonia (Macedonian denar). On 1 January 2010, the dobra of São Tomé and Príncipe was officially linked with the euro. Additionally, the Moroccan dirham is tied to a basket of currencies, including the euro and the US dollar, with the euro given the highest weighting.", "title": "Direct and indirect usage" }, { "paragraph_id": 48, "text": "These countries generally had previously implemented a currency peg to one of the major European currencies (e.g. the French franc, Deutsche Mark or Portuguese escudo), and when these currencies were replaced by the euro their currencies became pegged to the euro. Pegging a country's currency to a major currency is regarded as a safety measure, especially for currencies of areas with weak economies, as the euro is seen as a stable currency, prevents runaway inflation, and encourages foreign investment due to its stability.", "title": "Direct and indirect usage" }, { "paragraph_id": 49, "text": "In total, as of 2013, 182 million people in Africa use a currency pegged to the euro, 27 million people outside the eurozone in Europe, and another 545,000 people on Pacific islands.", "title": "Direct and indirect usage" }, { "paragraph_id": 50, "text": "Since 2005, stamps issued by the Sovereign Military Order of Malta have been denominated in euros, although the Order's official currency remains the Maltese scudo. The Maltese scudo itself is pegged to the euro and is only recognised as legal tender within the Order.", "title": "Direct and indirect usage" }, { "paragraph_id": 51, "text": "Since its introduction in 1999, the euro has been the second most widely held international reserve currency after the U.S. dollar. The share of the euro as a reserve currency increased from 18% in 1999 to 27% in 2008. Over this period, the share held in U.S. dollar fell from 71% to 64% and that held in RMB fell from 6.4% to 3.3%. The euro inherited and built on the status of the Deutsche Mark as the second most important reserve currency. The euro remains underweight as a reserve currency in advanced economies while overweight in emerging and developing economies: according to the International Monetary Fund the total of euro held as a reserve in the world at the end of 2008 was equal to $1.1 trillion or €850 billion, with a share of 22% of all currency reserves in advanced economies, but a total of 31% of all currency reserves in emerging and developing economies.", "title": "Direct and indirect usage" }, { "paragraph_id": 52, "text": "The possibility of the euro becoming the first international reserve currency has been debated among economists. Former Federal Reserve Chairman Alan Greenspan gave his opinion in September 2007 that it was \"absolutely conceivable that the euro will replace the US dollar as reserve currency, or will be traded as an equally important reserve currency\". 
In contrast to Greenspan's 2007 assessment, the euro's increase in the share of the worldwide currency reserve basket has slowed considerably since 2007 and since the beginning of the worldwide credit crunch related recession and European sovereign-debt crisis.", "title": "Direct and indirect usage" }, { "paragraph_id": 53, "text": "In economics, an optimum currency area, or region (OCA or OCR), is a geographical region in which it would maximise economic efficiency to have the entire region share a single currency. There are two models, both proposed by Robert Mundell: the stationary expectations model and the international risk sharing model. Mundell himself advocates the international risk sharing model and thus concludes in favour of the euro. However, even before the creation of the single currency, there were concerns over diverging economies. Before the late-2000s recession it was considered unlikely that a state would leave the euro or the whole zone would collapse. However the Greek government-debt crisis led to former British Foreign Secretary Jack Straw claiming the eurozone could not last in its current form. Part of the problem seems to be the rules that were created when the euro was set up. John Lanchester, writing for The New Yorker, explains it:", "title": "Economics" }, { "paragraph_id": 54, "text": "The guiding principle of the currency, which opened for business in 1999, were supposed to be a set of rules to limit a country's annual deficit to three per cent of gross domestic product, and the total accumulated debt to sixty per cent of G.D.P. It was a nice idea, but by 2004 the two biggest economies in the euro zone, Germany and France, had broken the rules for three years in a row.", "title": "Economics" }, { "paragraph_id": 55, "text": "The most obvious benefit of adopting a single currency is to remove the cost of exchanging currency, theoretically allowing businesses and individuals to consummate previously unprofitable trades. For consumers, banks in the eurozone must charge the same for intra-member cross-border transactions as purely domestic transactions for electronic payments (e.g., credit cards, debit cards and cash machine withdrawals).", "title": "Economics" }, { "paragraph_id": 56, "text": "Financial markets on the continent are expected to be far more liquid and flexible than they were in the past. The reduction in cross-border transaction costs will allow larger banking firms to provide a wider array of banking services that can compete across and beyond the eurozone. However, although transaction costs were reduced, some studies have shown that risk aversion has increased during the last 40 years in the Eurozone.", "title": "Economics" }, { "paragraph_id": 57, "text": "Another effect of the common European currency is that differences in prices—in particular in price levels—should decrease because of the law of one price. Differences in prices can trigger arbitrage, i.e., speculative trade in a commodity across borders purely to exploit the price differential. Therefore, prices on commonly traded goods are likely to converge, causing inflation in some regions and deflation in others during the transition. Some evidence of this has been observed in specific eurozone markets.", "title": "Economics" }, { "paragraph_id": 58, "text": "Before the introduction of the euro, some countries had successfully contained inflation, which was then seen as a major economic problem, by establishing largely independent central banks. 
One such bank was the Bundesbank in Germany; the European Central Bank was modelled on the Bundesbank.", "title": "Economics" }, { "paragraph_id": 59, "text": "The euro has come under criticism due to its regulation, lack of flexibility and rigidity towards sharing member states on issues such as nominal interest rates. Many national and corporate bonds denominated in euro are significantly more liquid and have lower interest rates than was historically the case when denominated in national currencies. While increased liquidity may lower the nominal interest rate on the bond, denominating the bond in a currency with low levels of inflation arguably plays a much larger role. A credible commitment to low levels of inflation and a stable debt reduces the risk that the value of the debt will be eroded by higher levels of inflation or default in the future, allowing debt to be issued at a lower nominal interest rate.", "title": "Economics" }, { "paragraph_id": 60, "text": "There is also a cost in structurally keeping inflation lower than in the United States, United Kingdom, and China. The result is that seen from those countries, the euro has become expensive, making European products increasingly expensive for its largest importers; hence export from the eurozone becomes more difficult.", "title": "Economics" }, { "paragraph_id": 61, "text": "In general, those in Europe who own large amounts of euro are served by high stability and low inflation.", "title": "Economics" }, { "paragraph_id": 62, "text": "A monetary union means states in that union lose the main mechanism of recovery of their international competitiveness by weakening (depreciating) their currency. When wages become too high compared to productivity in the exports sector, then these exports become more expensive and they are crowded out from the market within a country and abroad. This drives the fall of employment and output in the exports sector and fall of trade and current account balances. Fall of output and employment in the tradable goods sector may be offset by the growth of non-exports sectors, especially in construction and services. Increased purchases abroad and negative current account balances can be financed without a problem as long as credit is cheap. The need to finance trade deficit weakens currency, making exports automatically more attractive in a country and abroad. A state in a monetary union cannot use weakening of currency to recover its international competitiveness. To achieve this a state has to reduce prices, including wages (deflation). This could result in high unemployment and lower incomes as it was during the European sovereign-debt crisis.", "title": "Economics" }, { "paragraph_id": 63, "text": "The euro increased price transparency and stimulated cross-border trade. A 2009 consensus from the studies of the introduction of the euro concluded that it has increased trade within the eurozone by 5% to 10%, although one study suggested an increase of only 3% while another estimated 9 to 14%. However, a meta-analysis of all available studies suggests that the prevalence of positive estimates is caused by publication bias and that the underlying effect may be negligible. Although a more recent meta-analysis shows that publication bias decreases over time and that there are positive trade effects from the introduction of the euro, as long as results from before 2010 are taken into account. This may be because of the inclusion of the Financial crisis of 2007–2008 and ongoing integration within the EU. 
Furthermore, older studies based on certain methods of analysis of main trends reflecting general cohesion policies in Europe that started before, and continue after implementing the common currency find no effect on trade. These results suggest that other policies aimed at European integration might be the source of observed increase in trade. According to Barry Eichengreen, studies disagree on the magnitude of the effect of the euro on trade, but they agree that it did have an effect.", "title": "Economics" }, { "paragraph_id": 64, "text": "Physical investment seems to have increased by 5% in the eurozone due to the introduction. Regarding foreign direct investment, a study found that the intra-eurozone FDI stocks have increased by about 20% during the first four years of the EMU. Concerning the effect on corporate investment, there is evidence that the introduction of the euro has resulted in an increase in investment rates and that it has made it easier for firms to access financing in Europe. The euro has most specifically stimulated investment in companies that come from countries that previously had weak currencies. A study found that the introduction of the euro accounts for 22% of the investment rate after 1998 in countries that previously had a weak currency.", "title": "Economics" }, { "paragraph_id": 65, "text": "The introduction of the euro has led to extensive discussion about its possible effect on inflation. In the short term, there was a widespread impression in the population of the eurozone that the introduction of the euro had led to an increase in prices, but this impression was not confirmed by general indices of inflation and other studies. A study of this paradox found that this was due to an asymmetric effect of the introduction of the euro on prices: while it had no effect on most goods, it had an effect on cheap goods which have seen their price round up after the introduction of the euro. The study found that consumers based their beliefs on inflation of those cheap goods which are frequently purchased. It has also been suggested that the jump in small prices may be because prior to the introduction, retailers made fewer upward adjustments and waited for the introduction of the euro to do so.", "title": "Economics" }, { "paragraph_id": 66, "text": "One of the advantages of the adoption of a common currency is the reduction of the risk associated with changes in currency exchange rates. It has been found that the introduction of the euro created \"significant reductions in market risk exposures for nonfinancial firms both in and outside Europe\". These reductions in market risk \"were concentrated in firms domiciled in the eurozone and in non-euro firms with a high fraction of foreign sales or assets in Europe\".", "title": "Economics" }, { "paragraph_id": 67, "text": "The introduction of the euro increased European financial integration, which helped stimulate growth of a European securities market (bond markets are characterized by economies of scale dynamics). According to a study on this question, it has \"significantly reshaped the European financial system, especially with respect to the securities markets [...] However, the real and policy barriers to integration in the retail and corporate banking sectors remain significant, even if the wholesale end of banking has been largely integrated.\" Specifically, the euro has significantly decreased the cost of trade in bonds, equity, and banking assets within the eurozone. 
On a global level, there is evidence that the introduction of the euro has led to an integration in terms of investment in bond portfolios, with eurozone countries lending and borrowing more between each other than with other countries. Financial integration made it cheaper for European companies to borrow. Banks, firms and households could also invest more easily outside of their own country, thus creating greater international risk-sharing.", "title": "Economics" }, { "paragraph_id": 68, "text": "As of January 2014, and since the introduction of the euro, interest rates of most member countries (particularly those with a weak currency) have decreased. Some of these countries had the most serious sovereign financing problems.", "title": "Economics" }, { "paragraph_id": 69, "text": "The effect of declining interest rates, combined with excess liquidity continually provided by the ECB, made it easier for banks within the countries in which interest rates fell the most, and their linked sovereigns, to borrow significant amounts (above the 3% of GDP budget deficit imposed on the eurozone initially) and significantly inflate their public and private debt levels. Following the financial crisis of 2007–2008, governments in these countries found it necessary to bail out or nationalise their privately held banks to prevent systemic failure of the banking system when underlying hard or financial asset values were found to be grossly inflated and sometimes so nearly worthless there was no liquid market for them. This further increased the already high levels of public debt to a level the markets began to consider unsustainable, via increasing government bond interest rates, producing the ongoing European sovereign-debt crisis.", "title": "Economics" }, { "paragraph_id": 70, "text": "The evidence on the convergence of prices in the eurozone with the introduction of the euro is mixed. Several studies failed to find any evidence of convergence following the introduction of the euro after a phase of convergence in the early 1990s. Other studies have found evidence of price convergence, in particular for cars. A possible reason for the divergence between the different studies is that the processes of convergence may not have been linear, slowing down substantially between 2000 and 2003, and resurfacing after 2003 as suggested by a recent study (2009).", "title": "Economics" }, { "paragraph_id": 71, "text": "A study suggests that the introduction of the euro has had a positive effect on the amount of tourist travel within the EMU, with an increase of 6.5%.", "title": "Economics" }, { "paragraph_id": 72, "text": "The ECB targets interest rates rather than exchange rates and in general, does not intervene on the foreign exchange rate markets. This is because of the implications of the Mundell–Fleming model, which implies a central bank cannot (without capital controls) maintain interest rate and exchange rate targets simultaneously, because increasing the money supply results in a depreciation of the currency. In the years following the Single European Act, the EU has liberalised its capital markets and, as the ECB has inflation targeting as its monetary policy, the exchange-rate regime of the euro is floating.", "title": "Exchange rates" }, { "paragraph_id": 73, "text": "The euro is the second-most widely held reserve currency after the U.S. dollar. 
After its introduction on 4 January 1999 its exchange rate against the other major currencies fell reaching its lowest exchange rates in 2000 (3 May vs sterling, 25 October vs the U.S. dollar, 26 October vs Japanese yen). Afterwards it regained and its exchange rate reached its historical highest point in 2008 (15 July vs US dollar, 23 July vs Japanese yen, 29 December vs sterling). With the advent of the global financial crisis the euro initially fell, to regain later. Despite pressure due to the European sovereign-debt crisis the euro remained stable. In November 2011 the euro's exchange rate index – measured against currencies of the bloc's major trading partners – was trading almost two percent higher on the year, approximately at the same level as it was before the crisis kicked off in 2007. In mid July, 2022, the euro equalled the US dollar for a short period of time.", "title": "Exchange rates" }, { "paragraph_id": 74, "text": "Besides the economic motivations to the introduction of the euro, its creation was also partly justified as a way to foster a closer sense of joint identity between European citizens. Statements about this goal were for instance made by Wim Duisenberg, European Central Bank Governor, in 1998, Laurent Fabius, French Finance Minister, in 2000, and Romano Prodi, President of the European Commission, in 2002. However, 15 years after the introduction of the euro, a study found no evidence that it has had any effect on a shared sense of European identity.", "title": "Political considerations" }, { "paragraph_id": 75, "text": "The formal titles of the currency are euro for the major unit and cent for the minor (one-hundredth) unit and for official use in most eurozone languages; according to the ECB, all languages should use the same spelling for the nominative singular. This may contradict normal rules for word formation in some languages.", "title": "Euro in various official EU languages" }, { "paragraph_id": 76, "text": "Bulgaria has negotiated an exception; euro in the Bulgarian Cyrillic alphabet is spelled eвро (evro) and not eуро (euro) in all official documents. In the Greek script the term ευρώ (evró) is used; the Greek \"cent\" coins are denominated in λεπτό/ά (leptó/á). Official practice for English-language EU legislation is to use the words euro and cent as both singular and plural, although the European Commission's Directorate-General for Translation states that the plural forms euros and cents should be used in English. The word 'euro' is pronounced differently according to pronunciation rules in the individual languages applied; in German [ˈɔʏʁo], in English /ˈjʊəroʊ/, in French [øʁo], etc.", "title": "Euro in various official EU languages" }, { "paragraph_id": 77, "text": "In summary:", "title": "Euro in various official EU languages" }, { "paragraph_id": 78, "text": "For local phonetics, cent, use of plural and amount formatting (€6,00 or 6.00 €), see Language and the euro.", "title": "Euro in various official EU languages" } ]
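As a small illustration of the amount-formatting conventions mentioned above (€6,00 versus 6.00 €), the sketch below formats a euro amount in either style. It is a minimal, self-contained sketch: the function name and the two-style model are illustrative assumptions rather than an official EU formatting rule, and real applications would normally rely on locale-aware formatting libraries.

```python
# Minimal sketch (not an official EU formatting rule): illustrates the two
# amount-formatting conventions mentioned above, "€6,00" (symbol first,
# comma as decimal separator) and "6.00 €" (trailing symbol, dot separator).
# The function name and the two-style model are illustrative assumptions.

def format_euro(amount: float, style: str = "symbol-first") -> str:
    """Format a euro amount in one of two common display conventions."""
    if style == "symbol-first":
        # e.g. "€6,00" — comma as the decimal separator
        return "€" + f"{amount:.2f}".replace(".", ",")
    elif style == "symbol-last":
        # e.g. "6.00 €" — dot as the decimal separator, symbol trailing
        return f"{amount:.2f} €"
    raise ValueError(f"unknown style: {style!r}")

if __name__ == "__main__":
    print(format_euro(6.0, "symbol-first"))  # €6,00
    print(format_euro(6.0, "symbol-last"))   # 6.00 €
```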
The euro is the official currency of 20 of the 27 member states of the European Union. This group of states is officially known as the euro area or, commonly, the eurozone, and includes about 344 million citizens as of 2023. The euro is divided into 100 euro cents. The currency is also used officially by the institutions of the European Union, by four European microstates that are not EU members, the British Overseas Territory of Akrotiri and Dhekelia, as well as unilaterally by Montenegro and Kosovo. Outside Europe, a number of special territories of EU members also use the euro as their currency. Additionally, over 200 million people worldwide use currencies pegged to the euro. The euro is the second-largest reserve currency as well as the second-most traded currency in the world after the United States dollar. As of December 2019, with more than €1.3 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world. The name euro was officially adopted on 16 December 1995 in Madrid. The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1. Physical euro coins and banknotes entered into circulation on 1 January 2002, making it the day-to-day operating currency of its original members, and by March 2002 it had completely replaced the former currencies. Between December 1999 and December 2002, the euro traded below the US dollar, but has since traded near parity with or above the US dollar, peaking at US$1.60 on 18 July 2008 and since then returning near to its original issue rate. On 13 July 2022, the two currencies hit parity for the first time in nearly two decades due in part to the 2022 Russian invasion of Ukraine.
2001-10-09T15:16:08Z
2023-12-31T13:49:46Z
[ "Template:Infobox currency", "Template:Div col", "Template:Script", "Template:IPA", "Template:IPA-bg", "Template:Euro topics", "Template:Transliteration", "Template:Cite book", "Template:Short description", "Template:Legend", "Template:As of", "Template:Div col end", "Template:IPA-mt", "Template:Notelist", "Template:Main", "Template:IPA-fr", "Template:IPA-cs", "Template:IPA-es", "Template:Lang", "Template:IPA-el", "Template:IPA-hu", "Template:Portal bar", "Template:Sister project links", "Template:Good article", "Template:IPA-sk", "Template:IPA-lv", "Template:NoteFoot", "Template:Webarchive", "Template:Navboxes", "Template:Cite web", "Template:Hatgrp", "Template:NoteTag", "Template:ISO 4217", "Template:See also", "Template:Use dmy dates", "Template:IPA-fi", "Template:Reflist", "Template:Blockquote", "Template:IPA-it", "Template:Authority control", "Template:Use British English", "Template:Anchor", "Template:Needs IPA", "Template:Cite news", "Template:EUnum", "Template:Eurozone labelled map interior", "Template:Exchange Rate", "Template:IPA-et", "Template:IPA-de", "Template:IPA-pt", "Template:IPA-pl", "Template:IPA-sl", "Template:Euro adoption past", "Template:IPAc-en", "Template:IPA-da", "Template:IPA-nl", "Template:Multiple image", "Template:Cite journal", "Template:Legend-line", "Template:Clear", "Template:Further", "Template:Most traded currencies" ]
https://en.wikipedia.org/wiki/Euro
9,474
European Central Bank
The European Central Bank (ECB) is the prime component of the Eurosystem and the European System of Central Banks (ESCB) as well as one of seven institutions of the European Union. It is one of the world's most important central banks. The ECB Governing Council makes monetary policy for the Eurozone and the European Union, administers the foreign exchange reserves of EU member states, engages in foreign exchange operations, and defines the intermediate monetary objectives and key interest rate of the EU. The ECB Executive Board enforces the policies and decisions of the Governing Council, and may direct the national central banks when doing so. The ECB has the exclusive right to authorise the issuance of euro banknotes. Member states can issue euro coins, but the volume must be approved by the ECB beforehand. The bank also operates the TARGET2 payments system. The ECB was established by the Treaty of Amsterdam in May 1999 with the purpose of guaranteeing and maintaining price stability. On 1 December 2009, the Treaty of Lisbon became effective and the bank gained the official status of an EU institution. When the ECB was created, it covered a Eurozone of eleven members. Since then, Greece joined in January 2001, Slovenia in January 2007, Cyprus and Malta in January 2008, Slovakia in January 2009, Estonia in January 2011, Latvia in January 2014, Lithuania in January 2015 and Croatia in January 2023. The current President of the ECB is Christine Lagarde. Seated in Frankfurt, Germany, the bank formerly occupied the Eurotower prior to the construction of its new seat. The ECB is directly governed by European Union law. Its capital stock, worth €11 billion, is owned by all 27 central banks of the EU member states as shareholders. The initial capital allocation key was determined in 1998 on the basis of the states' population and GDP, but the capital key has been readjusted since. Shares in the ECB are not transferable and cannot be used as collateral. The European Central Bank is the de facto successor of the European Monetary Institute (EMI). The EMI was established at the start of the second stage of the EU's Economic and Monetary Union (EMU) to handle the transitional issues of states adopting the euro and prepare for the creation of the ECB and European System of Central Banks (ESCB). The EMI itself took over from the earlier European Monetary Cooperation Fund (EMCF). The ECB formally replaced the EMI on 1 June 1998 by virtue of the Treaty on European Union (TEU, Treaty of Maastricht), however it did not exercise its full powers until the introduction of the euro on 1 January 1999, signalling the third stage of EMU. The bank was the final institution needed for EMU, as outlined by the EMU reports of Pierre Werner and President Jacques Delors. It was established on 1 June 1998 The first President of the Bank was Wim Duisenberg, the former president of the Dutch central bank and the European Monetary Institute. While Duisenberg had been the head of the EMI (taking over from Alexandre Lamfalussy of Belgium) just before the ECB came into existence, the French government wanted Jean-Claude Trichet, former head of the French central bank, to be the ECB's first president. The French argued that since the ECB was to be located in Germany, its president should be French. This was opposed by the German, Dutch and Belgian governments who saw Duisenberg as a guarantor of a strong euro. 
Tensions were abated by a gentleman's agreement in which Duisenberg would stand down before the end of his mandate, to be replaced by Trichet. Trichet replaced Duisenberg as president in November 2003. Until 2007, the ECB had very successfully managed to maintain inflation close to but below 2%. The European Central Bank underwent a deep internal transformation as it faced the global financial crisis and the Eurozone debt crisis. The so-called European debt crisis began after Greece's newly elected government uncovered the real level of indebtedness and the budget deficit and warned EU institutions of the imminent danger of a Greek sovereign default. Foreseeing a possible sovereign default in the eurozone, the general public, international and European institutions, and the financial community reassessed the economic situation and creditworthiness of some Eurozone member states. Consequently, sovereign bond yields of several Eurozone countries started to rise sharply. This provoked a self-fulfilling panic on financial markets: the more Greek bond yields rose, the more likely a default appeared, and the more bond yields increased in turn. This panic was also aggravated by the reluctance of the ECB to react and intervene on sovereign bond markets, for two reasons. First, the ECB's legal framework normally forbids the purchase of sovereign bonds in the primary market (Article 123 TFEU). An over-interpretation of this limitation inhibited the ECB from implementing quantitative easing as the Federal Reserve and the Bank of England did as early as 2008, which played an important role in stabilizing markets. Secondly, a decision by the ECB made in 2005 introduced a minimum credit rating (BBB-) for all Eurozone sovereign bonds to be eligible as collateral for the ECB's open market operations. This meant that if a private rating agency were to downgrade a sovereign bond below that threshold, many banks would suddenly become illiquid because they would lose access to ECB refinancing operations. According to Athanasios Orphanides, a former member of the ECB's Governing Council, this change in the ECB's collateral framework "planted the seed" of the euro crisis. Faced with those regulatory constraints, the ECB led by Jean-Claude Trichet in 2010 was reluctant to intervene to calm financial markets. Up until 6 May 2010, Trichet formally denied at several press conferences the possibility that the ECB would embark on sovereign bond purchases, even though Greece, Ireland, Portugal, Spain and Italy faced waves of credit rating downgrades and increasing interest rate spreads. In a remarkable u-turn, the ECB announced on 10 May 2010 the launch of a "Securities Market Programme" (SMP) which involved the discretionary purchase of sovereign bonds in secondary markets. Extraordinarily, the decision was taken by the Governing Council during a teleconference call only three days after the ECB's usual meeting of 6 May (when Trichet still denied the possibility of purchasing sovereign bonds). The ECB justified this decision by the need to "address severe tensions in financial markets." The decision also coincided with the EU leaders' decision of 10 May to establish the European Financial Stabilisation Mechanism, which would serve as a crisis-fighting fund to safeguard the euro area from future sovereign debt crises. Although at first limited to the debt of Greece, Ireland and Portugal, the bulk of the ECB's bond buying eventually consisted of Spanish and Italian debt. 
These purchases were intended to dampen international speculation against stressed countries, and thus avoid a contagion of the Greek crisis towards other Eurozone countries. The assumption—largely justified—was that speculative activity would decrease over time and the value of the assets increase. Although SMP purchases did inject liquidity into financial markets, all of these injections were "sterilized" through weekly liquidity absorption. So the operation was net neutral in liquidity terms (though this was of little practical importance since normal monetary policy operations were ensuring unlimited supplies of liquidity at the main policy interest rate). In September 2011, ECB Board member Jürgen Stark resigned in protest against the "Securities Market Programme", which involved the purchase of sovereign bonds from Southern member states, a move that he considered equivalent to monetary financing, which is prohibited by the EU Treaty. The Financial Times Deutschland referred to this episode as "the end of the ECB as we know it", referring to its hitherto perceived "hawkish" stance on inflation and its historical Deutsche Bundesbank influence. As of 18 June 2012, the ECB in total had spent €212.1bn (equal to 2.2% of Eurozone GDP) on bond purchases covering outright debt, as part of the Securities Markets Programme. Controversially, the ECB made substantial profits out of the SMP, which were largely redistributed to Eurozone countries. In 2013, the Eurogroup decided to refund those profits to Greece; however, the payments were suspended from 2014 until 2017 over the conflict between Yanis Varoufakis and ministers of the Eurogroup. In 2018, the profit refunds were reinstated by the Eurogroup. However, several NGOs complained that a substantial part of the ECB profits would never be refunded to Greece. The ECB played a controversial role in the "Troika" by rejecting most forms of debt restructuring of public and bank debts, and pressing governments to adopt bailout programmes and structural reforms through secret letters to the Italian, Spanish, Greek and Irish governments. It has further been accused of interfering in the Greek referendum of July 2015 by constraining liquidity to Greek commercial banks. In November 2010, reflecting the huge increase in borrowing, including to cover the cost of having guaranteed the liabilities of banks, the cost of borrowing in the private financial markets had become prohibitive for the Irish government. Although it had deferred the cash cost of recapitalising the failing Anglo Irish Bank by nationalising it and issuing it with a "promissory note" (an IOU), the Government also faced a large deficit on its non-banking activities, and it therefore turned to the official sector for a loan to bridge the shortfall until its finances were credibly back on a sustainable footing. (Meanwhile, Anglo used the promissory note as collateral for its emergency loan (ELA) from the Central Bank. This enabled Anglo to repay its depositors and bondholders.) It later became clear that the ECB played a key role in making sure the Irish government did not let Anglo default on its debts, in order to avoid financial instability risks. 
On 15 October and 6 November 2010, ECB President Jean-Claude Trichet sent two secret letters to the Irish finance minister which essentially informed the Irish government of the possible suspension of ELA credit lines unless the government requested a financial assistance programme from the Eurogroup under the condition of further reforms and fiscal consolidation. In addition, the ECB insisted that no debt restructuring (or bail-in) should be applied to the nationalized banks' bondholders, a measure which could have saved Ireland 8 billion euros. During 2012, the ECB pressed for an early end to the ELA, and this situation was resolved with the liquidation of the successor institution IBRC in February 2013. The promissory note was exchanged for much longer-term marketable floating rate notes which were disposed of by the Central Bank over the following decade. In April 2011, the ECB raised interest rates for the first time since 2008, from 1% to 1.25%, with a further increase to 1.50% in July 2011. However, in 2012–2013 the ECB sharply lowered interest rates to encourage economic growth, reaching a historic low of 0.25% in November 2013. Soon after, the rates were cut to 0.15%; then, on 4 September 2014, the central bank reduced the rates by two-thirds, from 0.15% to 0.05%. The interest rates were later reduced further, reaching 0.00%, the lowest rate on record. In a report adopted on 13 March 2014, the European Parliament criticized the "potential conflict of interest between the current role of the ECB in the Troika as 'technical advisor' and its position as a creditor of the four Member States, as well as its mandate under the Treaty". The report was led by Austrian right-wing MEP Othmar Karas and French Socialist MEP Liem Hoang Ngoc. On 1 November 2011, Mario Draghi replaced Jean-Claude Trichet as President of the ECB. This change in leadership also marked the start of a new era in which the ECB became more and more interventionist and eventually ended the Eurozone sovereign debt crisis. Draghi's presidency started with the impressive launch of a new round of 1% interest loans with a term of three years (36 months) – the Long-term Refinancing Operations (LTRO). Under this programme, 523 banks tapped as much as €489.2 bn (US$640 bn). Observers were surprised by the volume of loans made when it was implemented. By far the biggest amount, €325bn, was tapped by banks in Greece, Ireland, Italy and Spain. Although those LTRO loans did not directly benefit EU governments, they effectively allowed banks to carry out a carry trade, by lending the LTRO funds on to governments at an interest margin. The operation also facilitated the rollover of €200bn of maturing bank debts in the first three months of 2012. As fears about sovereigns in the eurozone continued, Mario Draghi made a decisive speech in London, declaring that the ECB "...is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough." In light of slow political progress on solving the eurozone crisis, Draghi's statement has been seen as a key turning point, as it was immediately welcomed by European leaders and led to a steady decline in bond yields for eurozone countries, in particular Spain, Italy and France. Following up on Draghi's speech, on 6 September 2012 the ECB announced the Outright Monetary Transactions programme (OMT). Unlike the previous SMP programme, OMT has no ex-ante time or size limit. 
However, the activation of the purchases remains conditioned to the adherence by the benefitting country to an adjustment programme to the ESM. The program was adopted with near unanimity, the Bundesbank president Jens Weidmann being the sole member of the ECB's Governing Council to vote against it. Even if OMT was never actually implemented until today, it made the "Whatever it takes" pledge credible and significantly contributed to stabilizing financial markets and ending the sovereign debt crisis. According to various sources, the OMT programme and "whatever it takes" speeches were made possible because EU leaders previously agreed to build the banking union. In November 2014, the bank moved into its new premises, while the Eurotower building was dedicated to hosting the newly established supervisory activities of the ECB under European Banking Supervision. Although the sovereign debt crisis was almost solved by 2014, the ECB started to face a repeated decline in the Eurozone inflation rate, indicating that the economy was going towards a deflation. Responding to this threat, the ECB announced on 4 September 2014 the launch of two bond buying purchases programmes: the Covered Bond Purchasing Programme (CBPP3) and Asset-Backed Securities Programme (ABSPP). On 22 January 2015, the ECB announced an extension of those programmes within a full-fledge "quantitative easing" programme which also included sovereign bonds, to the tune of 60 billion euros per month up until at least September 2016. The programme was started on 9 March 2015. On 8 June 2016, the ECB added corporate bonds to its asset purchases portfolio with the launch of the corporate sector purchase programme (CSPP). Under this programme, it conducted the net purchase of corporate bonds until January 2019 to reach about €177 billion. While the programme was halted for 11 months in January 2019, the ECB restarted net purchases in November 2019. As of 2021, the size of the ECB's quantitative easing programme had reached 2947 billion euros. In July 2019, EU leaders nominated Christine Lagarde to replace Mario Draghi as ECB President. Lagarde resigned from her position as managing director of the International Monetary Fund in July 2019 and formally took over the ECB's presidency on 1 November 2019. Lagarde immediately signalled a change of style in the ECB's leadership. She embarked the ECB on a strategic review of the ECB's monetary policy strategy, an exercise the ECB had not done for 17 years. As part of this exercise, Lagarde committed the ECB to look into how monetary policy could contribute to address climate change, and promised that "no stone would be left unturned." The ECB president also adopted a change of communication style, in particular in her use of social media to promote gender equality, and by opening dialogue with civil society stakeholders. In March 2020, the ECB responded quickly and boldly by launching a package of measures including a new asset purchase programme: the €1,350 billion Pandemic Emergency Purchase Programme (PEPP) which aimed to lower borrowing costs and increase lending in the euro area. The PEPP was extended to cover an additional €500 billion in December 2020. The ECB also re-launched more TLTRO loans to banks at historically low levels and record-high take-up (€1.3 trillion in June 2020). Lending by banks to SMEs was also facilitated by collateral easing measures, and other supervisory relaxations. 
The ECB also reactivated currency swap lines and enhanced existing swap lines with central banks across the globe. As a consequence of the COVID-19 crisis, the ECB extended the duration of the strategy review until September 2021. On 13 July 2021, the ECB presented the outcomes of the strategy review, with the main following announcements: The ECB also said it would carry out another strategy review in 2025. The ECB has one primary objective – price stability – subject to which it may pursue secondary objectives. The primary objective of the European Central Bank, set out in Article 127(1) of the Treaty on the Functioning of the European Union, is to maintain price stability within the Eurozone. However the EU Treaties do not specify exactly how the ECB should pursue this objective. The European Central Bank has ample discretion over the way it pursues its price stability objective, as it can self-decide on the inflation target, and may also influence the way inflation is being measured. Since 2021, the ECB has defined its objective as targeting an inflation rate of 2%. Before that, the precise formulation of the price stability objective has changed over the years: The Governing Council in October 1998 defined price stability as inflation of under 2%, "a year-on-year increase in the Harmonised Index of Consumer Prices (HICP) for the euro area of below 2%" and added that price stability "was to be maintained over the medium term". In May 2003, following a thorough review of the ECB's monetary policy strategy, the Governing Council clarified that "in the pursuit of price stability, it aims to maintain inflation rates below, but close to, 2% over the medium term". In 2016, the European Central Bank's president has further adjusted its communication, by introducing the notion of "symmetry" in its definition of its target, thus making it clear that the ECB should respond both to inflationary pressure to deflationary pressures. As Draghi once said "symmetry meant not only that we would not accept persistently low inflation, but also that there was no cap on inflation at 2%." On 8 July 2021, as a result of the strategic review led by the new president Christine Lagarde, the ECB officially abandoned the "below but close to two per cent" definition and adopted instead a 2% symmetric target. Without prejudice to the objective of price stability, the Treaty (127 TFEU) also provides room for the ECB to pursue other objectives: Without prejudice to the objective of price stability, the ESCB shall support the general economic policies in the Union with a view to contributing to the achievement of the objectives of the Union as laid down in Article 3 of the Treaty on European Union. This legal provision is often considered to provide a "secondary mandate" to the ECB and offers ample justifications for the ECB to also prioritize other considerations such as full employment or environmental protection, which are mentioned in the Article 3 of the Treaty on the European Union. At the same time, economists and commentators are often divided on whether and how the ECB should pursue those secondary objectives, in particular the environmental impact. ECB official have also frequently pointed out the possible contradictions between those secondary objectives. To better guide the ECB's action on its secondary objectives, it has been suggested that closer consultation with the European Parliament would be warranted. 
In 2023, the ECB recognised the possible role of the European Parliament in the prioritisation of its secondary objectives. To carry out its main mission, the ECB's tasks include: The principal monetary policy tool of the European central bank is collateralised borrowing or repo agreements. The collateral used by the ECB is typically high quality public and private sector debt. All lending to credit institutions must be collateralised as required by Article 18 of the Statute of the ESCB. The criteria for determining "high quality" for public debt have been preconditions for membership in the European Union: total debt must not be too large in relation to a gross domestic product, for example, and deficits in any given year must not become too large. Though these criteria are fairly simple, a number of accounting techniques may hide the underlying reality of fiscal solvency—or the lack of the same. In the United States Federal Reserve Bank, the Federal Reserve buys assets: typically, bonds issued by the Federal government. There is no limit on the bonds that it can buy and one of the tools at its disposal in a financial crisis is to take such extraordinary measures as the purchase of large amounts of assets such as commercial paper. The purpose of such operations is to ensure that adequate liquidity is available for the functioning of the financial system. The Eurosystem, on the other hand, uses collateralized lending as a default instrument. There are about 1,500 eligible banks which may bid for short-term repo contracts. The difference is that banks in effect borrow cash from the ECB and must pay it back; the short durations allow interest rates to be adjusted continually. When the repo notes come due the participating banks bid again. An increase in the number of notes offered at auction allows an increase in liquidity in the economy. A decrease has the contrary effect. The contracts are carried on the asset side of the European Central Bank's balance sheet and the resulting deposits in member banks are carried as a liability. In layman's terms, the liability of the central bank is money, and an increase in deposits in member banks carried as a liability by the central bank, means that more money has been put into the economy. To qualify for participation in the auctions, banks must be able to offer proof of appropriate collateral in the form of loans to other entities. These can be the public debt of member states, but a fairly wide range of private banking securities are also accepted. The fairly stringent membership requirements for the European Union, especially with regard to sovereign debt as a percentage of each member state's gross domestic product, are designed to ensure that assets offered to the bank as collateral are, at least in theory, all equally good, and all equally protected from the risk of inflation. The ECB has four decision-making bodies, that take all the decisions with the objective of fulfilling the ECB's mandate: The Executive Board is responsible for the implementation of monetary policy (defined by the Governing Council) and the day-to-day running of the bank. It can issue decisions to national central banks and may also exercise powers delegated to it by the Governing Council. Executive Board members are assigned a portfolio of responsibilities by the President of the ECB. The executive board normally meets every Tuesday. 
It is composed of the President of the Bank (currently Christine Lagarde), the vice-president (currently Luis de Guindos) and four other members. They are all appointed by the European Council for non-renewable terms of eight years. Members of the executive board of the ECB are appointed "from among persons of recognised standing and professional experience in monetary or banking matters by common accord of the governments of the Member States at the level of Heads of State or Government, on a recommendation from the Council, after it has consulted the European Parliament and the Governing Council of the ECB". José Manuel González-Páramo, a Spanish member of the executive board since June 2004, was due to leave the board in early June 2012, but no replacement had been named as of late May. The Spanish had nominated Barcelona-born Antonio Sáinz de Vicuña – an ECB veteran who heads its legal department – as González-Páramo's replacement as early as January 2012, but alternatives from Luxembourg, Finland, and Slovenia were put forward and no decision made by May. After a long political battle and delays due to the European Parliament's protest over the lack of gender balance at the ECB, Luxembourg's Yves Mersch was appointed as González-Páramo's replacement. In December 2020, Frank Elderson succeeded to Yves Mersch at the ECB's board. The Governing Council is the main decision-making body of the Eurosystem. It comprises the members of the executive board (six in total) and the governors of the National Central Banks of the euro area countries (20 as of 2023). According to Article 284 of the TFEU, the President of the European Council and a representative from the European Commission may attend the meetings as observers, but they lack voting rights. Since January 2015, the ECB has published on its website a summary of the Governing Council deliberations ("accounts"). These publications came as a partial response to recurring criticism against the ECB's opacity. However, in contrast to other central banks, the ECB still does not disclose individual voting records of the governors seating in its council. The General Council is a body dealing with transitional issues of euro adoption, for example, fixing the exchange rates of currencies being replaced by the euro (continuing the tasks of the former EMI). It will continue to exist until all EU member states adopt the euro, at which point it will be dissolved. It is composed of the President and vice-president together with the governors of all of the EU's national central banks. The ECB Supervisory Board meets twice a month to discuss, plan and carry out the ECB's supervisory tasks. It proposes draft decisions to the Governing Council under the non-objection procedure. It is composed of Chair (appointed for a non-renewable term of five years), Vice-chair (chosen from among the members of the ECB's executive board) four ECB representatives and representatives of national supervisors. If the national supervisory authority designated by a Member State is not a national central bank (NCB), the representative of the competent authority can be accompanied by a representative from their NCB. In such cases, the representatives are together considered as one member for the purposes of the voting procedure. It also includes the Steering Committee, which supports the activities of the supervisory board and prepares the Board's meetings. 
It is composed by the chair of the supervisory board, Vice-chair of the supervisory board, one ECB representative and five representatives of national supervisors. The five representatives of national supervisors are appointed by the supervisory board for one year based on a rotation system that ensures a fair representation of countries. The ECB is governed by European law directly, but its set-up resembles that of a corporation in the sense that the ECB has shareholders and stock capital. Its initial capital was supposed to be €5 billion and the initial capital allocation key was determined in 1998 on the basis of the member states' populations and GDP, but the key is adjustable. The euro area NCBs were required to pay their respective subscriptions to the ECB's capital in full. The NCBs of the non-participating countries have had to pay 7% of their respective subscriptions to the ECB's capital as a contribution to the operational costs of the ECB. As a result, the ECB was endowed with an initial capital of just under €4 billion. The capital is held by the national central banks of the member states as shareholders. Shares in the ECB are not transferable and cannot be used as collateral. The NCBs are the sole subscribers to and holders of the capital of the ECB. Today, ECB capital is about €11 billion, which is held by the national central banks of the member states as shareholders. The NCBs' shares in this capital are calculated using a capital key which reflects the respective member's share in the total population and gross domestic product of the EU. The ECB adjusts the shares every five years and whenever the number of contributing NCBs changes. The adjustment is made on the basis of data provided by the European Commission. All national central banks (NCBs) that own a share of the ECB capital stock as of 1 February 2020 are listed below. Non-Euro area NCBs are required to pay up only a very small percentage of their subscribed capital, which accounts for the different magnitudes of Euro area and Non-Euro area total paid-up capital. In addition to capital subscriptions, the NCBs of the member states participating in the euro area provided the ECB with foreign reserve assets equivalent to around €40 billion. The contributions of each NCB is in proportion to its share in the ECB's subscribed capital, while in return each NCB is credited by the ECB with a claim in euro equivalent to its contribution. 15% of the contributions was made in gold, and the remaining 85% in US dollars and UK pounds sterling. The internal working language of the ECB is English, and press conferences are held in English. External communications are handled flexibly: English is preferred (though not exclusively) for communication within the ESCB (i.e. with other central banks) and with financial markets; communication with other national bodies and with EU citizens is normally in their respective language, but the ECB website is predominantly English; official documents such as the Annual Report are in the official languages of the EU (generally English, German and French). In 2022, the ECB publishes for the first time details on the nationality of its staff, revealing an over-representation of Germans and Italians along the ECB employees, including in management positions. The European Central Bank (and by extension, the Eurosystem) is often considered as the "most independent central bank in the world". 
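The capital key allocation described above can be illustrated with a short sketch. The equal weighting of population and GDP shares is assumed here for illustration, and the figures in the example are placeholders rather than actual EU data.

```python
# Illustrative sketch of a capital-key style calculation: each national
# central bank's share is taken here as the average of its country's share
# of total population and its share of total GDP (the equal weighting is an
# assumption for illustration; the figures below are made-up placeholders,
# not actual EU data).

def capital_key(population: dict[str, float], gdp: dict[str, float]) -> dict[str, float]:
    total_pop = sum(population.values())
    total_gdp = sum(gdp.values())
    return {
        country: 0.5 * (population[country] / total_pop)
                 + 0.5 * (gdp[country] / total_gdp)
        for country in population
    }

if __name__ == "__main__":
    pop = {"A": 80.0, "B": 60.0, "C": 10.0}        # millions (placeholder values)
    gdp = {"A": 3500.0, "B": 2500.0, "C": 250.0}   # EUR bn (placeholder values)
    for country, share in capital_key(pop, gdp).items():
        print(f"{country}: {share:.2%}")
```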
In general terms, this independence means that the Eurosystem's tasks and policies can be discussed, designed, decided and implemented in full autonomy, without pressure or need for instructions from any external body. The main justification for the ECB's independence is that such an institutional setup assists the maintenance of price stability. In practice, the ECB's independence is underpinned by four key principles: In return for its high degree of independence and discretion, the ECB is accountable to the European Parliament (and to a lesser extent to the European Court of Auditors, the European Ombudsman and the Court of Justice of the EU (CJEU)). Although the accountability mechanisms are not enshrined in EU law, several practices were established following a resolution of the European Parliament adopted in 1998, which were informally agreed by the ECB and incorporated into the Parliament's rules of procedure. In 2023, the European Parliament and the ECB made these accountability arrangements more formal by signing an exchange of letters. The accountability framework involves five main mechanisms: In 2013, an interinstitutional agreement was reached between the ECB and the European Parliament in the context of the establishment of the ECB's Banking Supervision. This agreement grants broader powers to the European Parliament than the established practice on the monetary policy side of the ECB's activities. For example, under the agreement, the Parliament can veto the appointment of the chair and vice-chair of the ECB's supervisory board and may approve removals if requested by the ECB. In addition to its independence, the ECB is subject to limited transparency obligations, in contrast to the standards applied to other EU institutions and other major central banks. Indeed, as pointed out by Transparency International, "The Treaties establish transparency and openness as principles of the EU and its institutions. They do, however, grant the ECB a partial exemption from these principles. According to Art. 15(3) TFEU, the ECB is bound by the EU's transparency principles "only when exercising [its] administrative tasks" (the exemption – which leaves the term "administrative tasks" undefined – equally applies to the Court of Justice of the European Union and to the European Investment Bank)." In practice, there are several concrete examples where the ECB is less transparent than other institutions: The bank is based in Ostend (East End), Frankfurt am Main. The city is the largest financial centre in the Eurozone and the bank's location in it is fixed by the Amsterdam Treaty. The bank moved to a new purpose-built headquarters in 2014, designed by a Vienna-based architectural office, Coop Himmelbau. The building is approximately 180 metres (591 ft) tall and is accompanied by other secondary buildings on a landscaped site at the location of the former wholesale market in the eastern part of Frankfurt am Main. The main construction, on a 120,000 m² total site area, began in October 2008, and it was expected that the building would become an architectural symbol for Europe. While it was designed to accommodate double the number of staff who operated in the former Eurotower, that building has been retained by the ECB, owing to more space being required since it took responsibility for banking supervision. The debate on the independence of the ECB finds its origins in the preparatory stages of the construction of the EMU. 
The German government agreed to go ahead if certain crucial guarantees were respected, such as a European Central Bank independent of national governments and shielded from political pressure along the lines of the German central bank. The French government, for its part, feared that this independence would mean that politicians would no longer have any room for manoeuvre in the process. A compromise was then reached by establishing a regular dialogue between the ECB and the Council of Finance Ministers of the euro area, the Eurogroup. There is strong consensus among economists on the value of central bank independence from politics. The rationale behind this is both empirical and theoretical. On the theoretical side, it is believed that time inconsistency suggests the existence of political business cycles where elected officials might take advantage of policy surprises to secure reelection. In the run-up to an election, politicians will therefore be incentivized to introduce expansionary monetary policies, reducing unemployment in the short run. These effects are most likely temporary. By contrast, in the long run, such policies will increase inflation, with unemployment returning to the natural rate and negating the positive effect. Furthermore, the credibility of the central bank will deteriorate, making it more difficult to respond to the market. Additionally, empirical work has been done that defined and measured central bank independence (CBI), looking at the relationship of CBI with inflation. Demystifying the independence of central bankers: According to Christopher Adolph (2009), the alleged neutrality of central bankers is only a legal façade and not an indisputable fact. To demonstrate this, the author analyses the professional careers of central bankers and compares them with their respective monetary decision-making. To explain the results of his analysis, he uses the "principal-agent" theory: in order to create a new entity, one needs a delegator or principal (in this case the heads of state or government of the euro area) and a delegate or agent (in this case the ECB). In his illustration, he describes the financial community as a "shadow principal" which influences the choice of central bankers, thus indicating that central banks indeed act as interfaces between the financial world and the states. It is therefore not surprising, still according to the author, to find the financial community's influence and preferences reflected in the appointment of central bankers, who are presumed conservative, neutral and impartial according to the model of the Independent Central Bank (ICB), which eliminates the famous "time inconsistency". Central bankers had a professional life before joining the central bank and their careers will most likely continue after their tenure. They are ultimately human beings. Therefore, for the author, central bankers have interests of their own, based on their past careers and their expectations after joining the ECB, and try to send messages to their future potential employers. The crisis: an opportunity for the ECB to impose its will and extend its powers: – Its participation in the troika: Thanks to the three factors which explain its independence, the ECB took advantage of this crisis to implement, through its participation in the troika, the famous structural reforms in the Member States aimed at making the various markets more flexible, particularly the labour market, which are still considered too rigid under the ordoliberal concept. 
– Macro-prudential supervision: At the same time, taking advantage of the reform of the financial supervision system, the Frankfurt-based bank has acquired new responsibilities, such as macro-prudential supervision, in other words, supervision of the provision of financial services. – Taking liberties with its mandate to save the euro: Paradoxically, the crisis undermined the ECB's ordoliberal discourse, "because some of its instruments, which it had to implement, deviated significantly from its principles". It then interpreted the paradigm with enough flexibility to adapt its original reputation to these new economic conditions. It was forced to do so as a last resort to save its one and only raison d'être: the euro. This independent institution was thus obliged to be pragmatic by departing from the spirit of its statutes, which was unacceptable to the staunchest supporters of ordoliberalism and led to the resignation of the two German leaders present within the ECB: the governor of the Bundesbank, Jens Weidmann, and the member of the executive board of the ECB, Jürgen Stark. – Regulation of the financial system: The delegation of this new function to the ECB was carried out with great simplicity and with the consent of European leaders, because neither the Commission nor the Member States really wanted to take on the monitoring of financial abuses throughout the area. In other words, in the event of a new financial crisis, the ECB would be the perfect scapegoat. – Capturing exchange rate policy: The event that most marked the definitive politicization of the ECB is, of course, the operation launched in January 2015: the quantitative easing (QE) operation. Indeed, the euro was overvalued on the world markets against the dollar and the euro zone was at risk of deflation. In addition, Member States found themselves heavily indebted, partly due to the rescue of their national banks. The ECB, as the guardian of the stability of the euro zone, decided to gradually buy more than EUR 1,100 billion of Member States' public debt. In this way, money was injected back into the economy, the euro depreciated significantly, prices rose, the risk of deflation was removed, and Member States reduced their debts. However, the ECB thereby gave itself the right to direct the exchange rate policy of the euro zone without this being granted by the Treaties or with the approval of European leaders, and without public opinion or the public arena being aware of it. In conclusion, for those in favour of a framework for ECB independence, there is a clear concentration of powers. In the light of these facts, it is clear that the ECB is no longer the simple guardian of monetary stability in the euro area, but has become, over the course of the crisis, a "multi-competent economic player, at ease in this role that no one, especially not the agnostic governments of the euro Member States, seems to have the idea of challenging". This new political super-actor, having captured many spheres of competence and a very strong influence over the economic field in the broad sense (economy, finance, budget...), can no longer act alone and refuse a counter-power, which is consubstantial with our liberal democracies. Indeed, the status of independence which the ECB enjoys by its very nature should not exempt it from real responsibility regarding the democratic process. In the aftermath of the euro area crisis, several proposals for a countervailing power were put forward, to deal with criticisms of a democratic deficit. 
For the German economist Otmar Issing (2001), the ECB has a democratic responsibility and should be more transparent. According to him, this transparency could bring several advantages, such as improved efficiency and credibility, by giving the public adequate information. Others think that the ECB should have a closer relationship with the European Parliament, which could play a major role in the evaluation of the democratic responsibility of the ECB. The development of new institutions or the creation of a minister is another solution proposed: A minister for the Eurozone? The idea of a eurozone finance minister is regularly raised and supported by certain political figures, including Emmanuel Macron, as well as former German Chancellor Angela Merkel, former President of the ECB Jean-Claude Trichet and former European Commissioner Pierre Moscovici. For the latter, this position would bring "more democratic legitimacy" and "more efficiency" to European politics. In his view, it is a question of merging the powers of the Commissioner for the Economy and Finance with those of the President of the Eurogroup. The main task of this minister would be to "represent a strong political authority protecting the economic and budgetary interests of the euro area as a whole, and not the interests of individual Member States". According to the Jacques Delors Institute, its competencies could be as follows: For Jean-Claude Trichet, this minister could also rely on the Eurogroup working group for the preparation and follow-up of meetings in eurozone format, and on the Economic and Financial Committee for meetings concerning all Member States. He would also have under his authority a General Secretariat of the Treasury of the euro area, whose tasks would be determined by the objectives of the budgetary union currently being set up. This proposal was nevertheless rejected in 2017 by the Eurogroup; its president, Jeroen Dijsselbloem, spoke of the importance of this institution in relation to the European Commission. Towards democratic institutions? The absence of democratic institutions such as a Parliament or a real government is a regular criticism of the ECB in its management of the euro area, and many proposals have been made in this respect, particularly after the economic crisis, which, it is argued, showed the need to improve the governance of the euro area. For Moïse Sidiropoulos, a professor of economics: "The crisis in the euro zone came as no surprise, because the euro remains an unfinished currency, a stateless currency with a fragile political legitimacy". French economist Thomas Piketty wrote on his blog in 2017 that it was essential to equip the eurozone with democratic institutions. An economic government could for example enable it to have a common budget, common taxes and borrowing and investment capacities. Such a government would then make the euro area more democratic and transparent by avoiding the opacity of a council such as the Eurogroup. Nevertheless, according to him, "there is no point in talking about a government of the eurozone if we do not say to which democratic body this government will be accountable". A real parliament of the eurozone, to which a finance minister would be accountable, seems to be the real priority for the economist, who also denounces the lack of action in this area. The creation of a sub-committee within the current European Parliament was also mentioned, on the model of the Eurogroup, which is currently a sub-formation of the ECOFIN Committee. 
This would require a simple amendment to the rules of procedure and would avoid a competitive situation between two separate parliamentary assemblies. The former President of the European Commission had, moreover, stated on this subject that he had "no sympathy for the idea of a specific Eurozone Parliament".
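As a stylised illustration of the collateralised lending mechanism described in the monetary policy section above, the sketch below models a toy central bank balance sheet in which each repo allotment adds a claim on the asset side and a matching bank deposit on the liability side, and each maturity reverses both. The class and the figures are illustrative assumptions, not a model of actual ECB operations.

```python
# Stylised sketch of how a repo allotment affects a central bank balance
# sheet: lending cash against collateral adds a claim (asset) and a matching
# bank deposit (liability); repayment at maturity reverses both. This is an
# illustrative toy model, not a description of actual ECB operations.

class StylisedCentralBank:
    def __init__(self) -> None:
        self.repo_claims = 0.0      # asset side: collateralised loans to banks
        self.bank_deposits = 0.0    # liability side: reserves held by banks

    def allot_repo(self, amount: float) -> None:
        """Lend `amount` against collateral: liquidity in the system rises."""
        self.repo_claims += amount
        self.bank_deposits += amount

    def repo_matures(self, amount: float) -> None:
        """The bank repays the repo: liquidity in the system falls again."""
        self.repo_claims -= amount
        self.bank_deposits -= amount

    def liquidity(self) -> float:
        return self.bank_deposits

if __name__ == "__main__":
    bank = StylisedCentralBank()
    bank.allot_repo(100.0)       # a weekly operation injects 100 of liquidity
    print(bank.liquidity())      # 100.0
    bank.repo_matures(100.0)     # maturity drains it again
    print(bank.liquidity())      # 0.0
```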
[ { "paragraph_id": 0, "text": "The European Central Bank (ECB) is the prime component of the Eurosystem and the European System of Central Banks (ESCB) as well as one of seven institutions of the European Union. It is one of the world's most important central banks.", "title": "" }, { "paragraph_id": 1, "text": "The ECB Governing Council makes monetary policy for the Eurozone and the European Union, administers the foreign exchange reserves of EU member states, engages in foreign exchange operations, and defines the intermediate monetary objectives and key interest rate of the EU. The ECB Executive Board enforces the policies and decisions of the Governing Council, and may direct the national central banks when doing so. The ECB has the exclusive right to authorise the issuance of euro banknotes. Member states can issue euro coins, but the volume must be approved by the ECB beforehand. The bank also operates the TARGET2 payments system.", "title": "" }, { "paragraph_id": 2, "text": "The ECB was established by the Treaty of Amsterdam in May 1999 with the purpose of guaranteeing and maintaining price stability. On 1 December 2009, the Treaty of Lisbon became effective and the bank gained the official status of an EU institution. When the ECB was created, it covered a Eurozone of eleven members. Since then, Greece joined in January 2001, Slovenia in January 2007, Cyprus and Malta in January 2008, Slovakia in January 2009, Estonia in January 2011, Latvia in January 2014, Lithuania in January 2015 and Croatia in January 2023. The current President of the ECB is Christine Lagarde. Seated in Frankfurt, Germany, the bank formerly occupied the Eurotower prior to the construction of its new seat.", "title": "" }, { "paragraph_id": 3, "text": "The ECB is directly governed by European Union law. Its capital stock, worth €11 billion, is owned by all 27 central banks of the EU member states as shareholders. The initial capital allocation key was determined in 1998 on the basis of the states' population and GDP, but the capital key has been readjusted since. Shares in the ECB are not transferable and cannot be used as collateral.", "title": "" }, { "paragraph_id": 4, "text": "The European Central Bank is the de facto successor of the European Monetary Institute (EMI). The EMI was established at the start of the second stage of the EU's Economic and Monetary Union (EMU) to handle the transitional issues of states adopting the euro and prepare for the creation of the ECB and European System of Central Banks (ESCB). The EMI itself took over from the earlier European Monetary Cooperation Fund (EMCF).", "title": "History" }, { "paragraph_id": 5, "text": "The ECB formally replaced the EMI on 1 June 1998 by virtue of the Treaty on European Union (TEU, Treaty of Maastricht), however it did not exercise its full powers until the introduction of the euro on 1 January 1999, signalling the third stage of EMU. The bank was the final institution needed for EMU, as outlined by the EMU reports of Pierre Werner and President Jacques Delors. It was established on 1 June 1998 The first President of the Bank was Wim Duisenberg, the former president of the Dutch central bank and the European Monetary Institute. While Duisenberg had been the head of the EMI (taking over from Alexandre Lamfalussy of Belgium) just before the ECB came into existence, the French government wanted Jean-Claude Trichet, former head of the French central bank, to be the ECB's first president. 
The French argued that since the ECB was to be located in Germany, its president should be French. This was opposed by the German, Dutch and Belgian governments who saw Duisenberg as a guarantor of a strong euro. Tensions were abated by a gentleman's agreement in which Duisenberg would stand down before the end of his mandate, to be replaced by Trichet.", "title": "History" }, { "paragraph_id": 6, "text": "Trichet replaced Duisenberg as president in November 2003. Until 2007, the ECB had very successfully managed to maintain inflation close but below 2%.", "title": "History" }, { "paragraph_id": 7, "text": "The European Central Bank underwent through a deep internal transformation as it faced the global financial crisis and the Eurozone debt crisis.", "title": "History" }, { "paragraph_id": 8, "text": "The so-called European debt crisis began after Greece's new elected government uncovered the real level indebtedness and budget deficit and warned EU institutions of the imminent danger of a Greek sovereign default.", "title": "History" }, { "paragraph_id": 9, "text": "Foreseeing a possible sovereign default in the eurozone, the general public, international and European institutions, and the financial community reassessed the economic situation and creditworthiness of some Eurozone member states. Consequently, sovereign bonds yields of several Eurozone countries started to rise sharply. This provoked a self-fulfilling panic on financial markets: the more Greek bonds yields rose, the more likely a default became possible, the more bond yields increased in turn.", "title": "History" }, { "paragraph_id": 10, "text": "This panic was also aggravated because of the reluctance of the ECB to react and intervene on sovereign bond markets for two reasons. First, because the ECB's legal framework normally forbids the purchase of sovereign bonds in the primary market (Article 123. TFEU), An over-interpretation of this limitation, inhibited the ECB from implementing quantitative easing like the Federal Reserve and the Bank of England did as soon as 2008, which played an important role in stabilizing markets.", "title": "History" }, { "paragraph_id": 11, "text": "Secondly, a decision by the ECB made in 2005 introduced a minimum credit rating (BBB-) for all Eurozone sovereign bonds to be eligible as collateral to the ECB's open market operations. This meant that if a private rating agencies were to downgrade a sovereign bond below that threshold, many banks would suddenly become illiquid because they would lose access to ECB refinancing operations. According to former member of the governing council of the ECB Athanasios Orphanides, this change in the ECB's collateral framework \"planted the seed\" of the euro crisis.", "title": "History" }, { "paragraph_id": 12, "text": "Faced with those regulatory constraints, the ECB led by Jean-Claude Trichet in 2010 was reluctant to intervene to calm down financial markets. Up until 6 May 2010, Trichet formally denied at several press conferences the possibility of the ECB to embark into sovereign bonds purchases, even though Greece, Ireland, Portugal, Spain and Italy faced waves of credit rating downgrades and increasing interest rate spreads.", "title": "History" }, { "paragraph_id": 13, "text": "In a remarkable u-turn, the ECB announced on 10 May 2010, the launch of a \"Securities Market Programme\" (SMP) which involved the discretionary purchase of sovereign bonds in secondary markets. 
Extraordinarily, the decision was taken by the Governing Council during a teleconference call only three days after the ECB's usual meeting of 6 May (when Trichet still denied the possibility of purchasing sovereign bonds). The ECB justified this decision by the necessity to \"address severe tensions in financial markets.\" The decision also coincided with the EU leaders decision of 10 May to establish the European Financial Stabilisation mechanism, which would serve as a crisis fighting fund to safeguard the euro area from future sovereign debt crisis.", "title": "History" }, { "paragraph_id": 14, "text": "Although at first limited to the debt of Greece, Ireland and Portugal, the bulk of the ECB's bond buying eventually consisted of Spanish and Italian debt. These purchases were intended to dampen international speculation against stressed countries, and thus avoid a contagion of the Greek crisis towards other Eurozone countries. The assumption—largely justified—was that speculative activity would decrease over time and the value of the assets increase.", "title": "History" }, { "paragraph_id": 15, "text": "Although SMP purchases did inject liquidity into financial markets, all of these injections were \"sterilized\" through weekly liquidity absorption. So the operation was net neutral in liquidity terms (though this was of little practical importance since normal monetary policy operations were ensuring unlimited supplies of liquidity at the main policy interest rate).", "title": "History" }, { "paragraph_id": 16, "text": "In September 2011, ECB's Board member Jürgen Stark, resigned in protest against the \"Securities Market Programme\" which involved the purchase of sovereign bonds from Southern member states, a move that he considered as equivalent to monetary financing, which is prohibited by the EU Treaty. The Financial Times Deutschland referred to this episode as \"the end of the ECB as we know it\", referring to its hitherto perceived \"hawkish\" stance on inflation and its historical Deutsche Bundesbank influence.", "title": "History" }, { "paragraph_id": 17, "text": "As of 18 June 2012, the ECB in total had spent €212.1bn (equal to 2.2% of the Eurozone GDP) for bond purchases covering outright debt, as part of the Securities Markets Programme. Controversially, the ECB made substantial profits out of SMP, which were largely redistributed to Eurozone countries. In 2013, the Eurogroup decided to refund those profits to Greece, however, the payments were suspended from 2014 until 2017 over the conflict between Yanis Varoufakis and ministers of the Eurogroup. In 2018, profits refunds were reinstalled by the Eurogroup. However, several NGOs complained that a substantial part of the ECB profits would never be refunded to Greece.", "title": "History" }, { "paragraph_id": 18, "text": "The ECB played a controversial role in the \"Troika\" by rejecting most forms of debt restructuring of public and bank debts, and pressing governments to adopt bailout programmes and structural reforms through secret letters to Italian, Spanish, Greek and Irish governments. It has further been accused of interfering in the Greek referendum of July 2015 by constraining liquidity to Greek commercial banks.", "title": "History" }, { "paragraph_id": 19, "text": "In November 2010, reflecting the huge increase in borrowing, including the cover the cost of having guaranteed the liabilities of banks, the cost of borrowing in the private financial markets had become prohibitive for the Irish government. 
Although it had deferred the cash cost of recapitalising the failing Anglo Irish Bank by nationalising it and issuing it with a \"promissory note\" (an IOU), the Government also faced a large deficit on its non-banking activities, and it therefore turned to the official sector for a loan to bridge the shortfall until its finances were credibly back on a sustainable footing. (Meanwhile, Anglo used the promissory note as collateral for its emergency loan (ELA) from the Central Bank. This enabled Anglo to repay its depositors and bondholders.)", "title": "History" }, { "paragraph_id": 20, "text": "It became clear later that the ECB played a key role in making sure the Irish government did not let Anglo default on its debts, to avoid financial instability risks. On 15 October and 6 November 2010, the ECB President Jean-Claude Trichet sent two secret letters to the Irish Finance Minister which essentially informed the Irish government of the possible suspension of ELA's credit lines, unless the government requested a financial assistance programme from the Eurogroup under the condition of further reforms and fiscal consolidation.", "title": "History" }, { "paragraph_id": 21, "text": "In addition, the ECB insisted that no debt restructuring (or bail-in) should be applied to the nationalized banks' bondholders, a measure which could have saved Ireland 8 billion euros.", "title": "History" }, { "paragraph_id": 22, "text": "During 2012, the ECB pressed for an early end to the ELA, and this situation was resolved with the liquidation of the successor institution IBRC in February 2013. The promissory note was exchanged for much longer term marketable floating rate notes which were disposed of by the Central Bank over the following decade.", "title": "History" }, { "paragraph_id": 23, "text": "In April 2011, the ECB raised interest rates for the first time since 2008, from 1% to 1.25%, with a further increase to 1.50% in July 2011. However, in 2012–2013 the ECB sharply lowered interest rates to encourage economic growth, reaching a historic low of 0.25% in November 2013. Soon after, the rates were cut to 0.15%; then on 4 September 2014 the central bank reduced the rates by two-thirds, from 0.15% to 0.05%. Recently, the interest rates were further reduced, reaching 0.00%, the lowest rates on record.", "title": "History" }, { "paragraph_id": 24, "text": "In a report adopted on 13 March 2014, the European Parliament criticized the \"potential conflict of interest between the current role of the ECB in the Troika as 'technical advisor' and its position as a creditor of the four Member States, as well as its mandate under the Treaty\". The report was led by Austrian right-wing MEP Othmar Karas and French Socialist MEP Liem Hoang Ngoc.", "title": "History" }, { "paragraph_id": 25, "text": "On 1 November 2011, Mario Draghi replaced Jean-Claude Trichet as President of the ECB. This change in leadership also marked the start of a new era in which the ECB became more and more interventionist and eventually ended the Eurozone sovereign debt crisis.", "title": "History" }, { "paragraph_id": 26, "text": "Draghi's presidency started with the impressive launch of a new round of 1% interest loans with a term of three years (36 months) – the Long-Term Refinancing Operations (LTRO). Under this programme, 523 banks tapped as much as €489.2 bn (US$640 bn). Observers were surprised by the volume of loans made when it was implemented. 
By far the biggest amount, €325bn, was tapped by banks in Greece, Ireland, Italy and Spain. Although those LTRO loans did not directly benefit EU governments, they effectively allowed banks to do a carry trade, by lending the LTRO funds on to governments with an interest margin. The operation also facilitated the rollover of €200bn of maturing bank debts in the first three months of 2012.", "title": "History" }, { "paragraph_id": 27, "text": "Facing renewed fears about eurozone sovereigns, Mario Draghi made a decisive speech in London in July 2012, declaring that the ECB \"...is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough.\" In light of slow political progress on solving the eurozone crisis, Draghi's statement has been seen as a key turning point in the eurozone crisis, as it was immediately welcomed by European leaders, and led to a steady decline in bond yields for eurozone countries, in particular Spain, Italy and France.", "title": "History" }, { "paragraph_id": 28, "text": "Following up on Draghi's speech, on 6 September 2012 the ECB announced the Outright Monetary Transactions programme (OMT). Unlike the previous SMP programme, OMT has no ex-ante time or size limit. However, the activation of the purchases remains conditional on the benefitting country's adherence to an adjustment programme with the ESM. The programme was adopted with near unanimity, the Bundesbank president Jens Weidmann being the sole member of the ECB's Governing Council to vote against it.", "title": "History" }, { "paragraph_id": 29, "text": "Even though OMT has never actually been activated to date, it made the \"Whatever it takes\" pledge credible and significantly contributed to stabilizing financial markets and ending the sovereign debt crisis. According to various sources, the OMT programme and \"whatever it takes\" speeches were made possible because EU leaders previously agreed to build the banking union.", "title": "History" }, { "paragraph_id": 30, "text": "In November 2014, the bank moved into its new premises, while the Eurotower building was dedicated to hosting the newly established supervisory activities of the ECB under European Banking Supervision.", "title": "History" }, { "paragraph_id": 31, "text": "Although the sovereign debt crisis was almost solved by 2014, the ECB started to face repeated declines in the Eurozone inflation rate, indicating that the economy was heading towards deflation. Responding to this threat, the ECB announced on 4 September 2014 the launch of two bond purchase programmes: the Covered Bond Purchase Programme (CBPP3) and the Asset-Backed Securities Purchase Programme (ABSPP).", "title": "History" }, { "paragraph_id": 32, "text": "On 22 January 2015, the ECB announced an extension of those programmes within a full-fledged \"quantitative easing\" programme which also included sovereign bonds, to the tune of 60 billion euros per month up until at least September 2016. The programme was started on 9 March 2015.", "title": "History" }, { "paragraph_id": 33, "text": "On 8 June 2016, the ECB added corporate bonds to its asset purchase portfolio with the launch of the corporate sector purchase programme (CSPP). Under this programme, it conducted net purchases of corporate bonds until January 2019, reaching about €177 billion. 
While the programme was halted for 11 months in January 2019, the ECB restarted net purchases in November 2019.", "title": "History" }, { "paragraph_id": 34, "text": "As of 2021, the size of the ECB's quantitative easing programme had reached 2947 billion euros.", "title": "History" }, { "paragraph_id": 35, "text": "In July 2019, EU leaders nominated Christine Lagarde to replace Mario Draghi as ECB President. Lagarde resigned from her position as managing director of the International Monetary Fund in July 2019 and formally took over the ECB's presidency on 1 November 2019.", "title": "History" }, { "paragraph_id": 36, "text": "Lagarde immediately signalled a change of style in the ECB's leadership. She embarked the ECB on a strategic review of the ECB's monetary policy strategy, an exercise the ECB had not done for 17 years. As part of this exercise, Lagarde committed the ECB to look into how monetary policy could contribute to address climate change, and promised that \"no stone would be left unturned.\" The ECB president also adopted a change of communication style, in particular in her use of social media to promote gender equality, and by opening dialogue with civil society stakeholders.", "title": "History" }, { "paragraph_id": 37, "text": "In March 2020, the ECB responded quickly and boldly by launching a package of measures including a new asset purchase programme: the €1,350 billion Pandemic Emergency Purchase Programme (PEPP) which aimed to lower borrowing costs and increase lending in the euro area. The PEPP was extended to cover an additional €500 billion in December 2020. The ECB also re-launched more TLTRO loans to banks at historically low levels and record-high take-up (€1.3 trillion in June 2020). Lending by banks to SMEs was also facilitated by collateral easing measures, and other supervisory relaxations. The ECB also reactivated currency swap lines and enhanced existing swap lines with central banks across the globe.", "title": "History" }, { "paragraph_id": 38, "text": "As a consequence of the COVID-19 crisis, the ECB extended the duration of the strategy review until September 2021. On 13 July 2021, the ECB presented the outcomes of the strategy review, with the main following announcements:", "title": "History" }, { "paragraph_id": 39, "text": "The ECB also said it would carry out another strategy review in 2025.", "title": "History" }, { "paragraph_id": 40, "text": "The ECB has one primary objective – price stability – subject to which it may pursue secondary objectives.", "title": "Mandate and inflation target" }, { "paragraph_id": 41, "text": "The primary objective of the European Central Bank, set out in Article 127(1) of the Treaty on the Functioning of the European Union, is to maintain price stability within the Eurozone. However the EU Treaties do not specify exactly how the ECB should pursue this objective. The European Central Bank has ample discretion over the way it pursues its price stability objective, as it can self-decide on the inflation target, and may also influence the way inflation is being measured.", "title": "Mandate and inflation target" }, { "paragraph_id": 42, "text": "Since 2021, the ECB has defined its objective as targeting an inflation rate of 2%. 
Before that, the precise formulation of the price stability objective has changed over the years:", "title": "Mandate and inflation target" }, { "paragraph_id": 43, "text": "The Governing Council in October 1998 defined price stability as inflation of under 2%, \"a year-on-year increase in the Harmonised Index of Consumer Prices (HICP) for the euro area of below 2%\" and added that price stability \"was to be maintained over the medium term\". In May 2003, following a thorough review of the ECB's monetary policy strategy, the Governing Council clarified that \"in the pursuit of price stability, it aims to maintain inflation rates below, but close to, 2% over the medium term\". In 2016, the European Central Bank's president has further adjusted its communication, by introducing the notion of \"symmetry\" in its definition of its target, thus making it clear that the ECB should respond both to inflationary pressure to deflationary pressures. As Draghi once said \"symmetry meant not only that we would not accept persistently low inflation, but also that there was no cap on inflation at 2%.\"", "title": "Mandate and inflation target" }, { "paragraph_id": 44, "text": "On 8 July 2021, as a result of the strategic review led by the new president Christine Lagarde, the ECB officially abandoned the \"below but close to two per cent\" definition and adopted instead a 2% symmetric target.", "title": "Mandate and inflation target" }, { "paragraph_id": 45, "text": "Without prejudice to the objective of price stability, the Treaty (127 TFEU) also provides room for the ECB to pursue other objectives:", "title": "Mandate and inflation target" }, { "paragraph_id": 46, "text": "Without prejudice to the objective of price stability, the ESCB shall support the general economic policies in the Union with a view to contributing to the achievement of the objectives of the Union as laid down in Article 3 of the Treaty on European Union.", "title": "Mandate and inflation target" }, { "paragraph_id": 47, "text": "This legal provision is often considered to provide a \"secondary mandate\" to the ECB and offers ample justifications for the ECB to also prioritize other considerations such as full employment or environmental protection, which are mentioned in the Article 3 of the Treaty on the European Union. At the same time, economists and commentators are often divided on whether and how the ECB should pursue those secondary objectives, in particular the environmental impact. ECB official have also frequently pointed out the possible contradictions between those secondary objectives. To better guide the ECB's action on its secondary objectives, it has been suggested that closer consultation with the European Parliament would be warranted. In 2023, the ECB recognised the possible role of the European Parliament in the prioritisation of its secondary objectives.", "title": "Mandate and inflation target" }, { "paragraph_id": 48, "text": "To carry out its main mission, the ECB's tasks include:", "title": "Mandate and inflation target" }, { "paragraph_id": 49, "text": "The principal monetary policy tool of the European central bank is collateralised borrowing or repo agreements. 
The collateral used by the ECB is typically high quality public and private sector debt.", "title": "Mandate and inflation target" }, { "paragraph_id": 50, "text": "All lending to credit institutions must be collateralised as required by Article 18 of the Statute of the ESCB.", "title": "Mandate and inflation target" }, { "paragraph_id": 51, "text": "The criteria for determining \"high quality\" for public debt have been preconditions for membership in the European Union: total debt must not be too large in relation to a gross domestic product, for example, and deficits in any given year must not become too large. Though these criteria are fairly simple, a number of accounting techniques may hide the underlying reality of fiscal solvency—or the lack of the same.", "title": "Mandate and inflation target" }, { "paragraph_id": 52, "text": "In the United States Federal Reserve Bank, the Federal Reserve buys assets: typically, bonds issued by the Federal government. There is no limit on the bonds that it can buy and one of the tools at its disposal in a financial crisis is to take such extraordinary measures as the purchase of large amounts of assets such as commercial paper. The purpose of such operations is to ensure that adequate liquidity is available for the functioning of the financial system.", "title": "Mandate and inflation target" }, { "paragraph_id": 53, "text": "The Eurosystem, on the other hand, uses collateralized lending as a default instrument. There are about 1,500 eligible banks which may bid for short-term repo contracts. The difference is that banks in effect borrow cash from the ECB and must pay it back; the short durations allow interest rates to be adjusted continually. When the repo notes come due the participating banks bid again. An increase in the number of notes offered at auction allows an increase in liquidity in the economy. A decrease has the contrary effect. The contracts are carried on the asset side of the European Central Bank's balance sheet and the resulting deposits in member banks are carried as a liability. In layman's terms, the liability of the central bank is money, and an increase in deposits in member banks carried as a liability by the central bank, means that more money has been put into the economy.", "title": "Mandate and inflation target" }, { "paragraph_id": 54, "text": "To qualify for participation in the auctions, banks must be able to offer proof of appropriate collateral in the form of loans to other entities. These can be the public debt of member states, but a fairly wide range of private banking securities are also accepted. The fairly stringent membership requirements for the European Union, especially with regard to sovereign debt as a percentage of each member state's gross domestic product, are designed to ensure that assets offered to the bank as collateral are, at least in theory, all equally good, and all equally protected from the risk of inflation.", "title": "Mandate and inflation target" }, { "paragraph_id": 55, "text": "The ECB has four decision-making bodies, that take all the decisions with the objective of fulfilling the ECB's mandate:", "title": "Organization" }, { "paragraph_id": 56, "text": "The Executive Board is responsible for the implementation of monetary policy (defined by the Governing Council) and the day-to-day running of the bank. It can issue decisions to national central banks and may also exercise powers delegated to it by the Governing Council. 
Executive Board members are assigned a portfolio of responsibilities by the President of the ECB. The executive board normally meets every Tuesday.", "title": "Organization" }, { "paragraph_id": 57, "text": "It is composed of the President of the Bank (currently Christine Lagarde), the vice-president (currently Luis de Guindos) and four other members. They are all appointed by the European Council for non-renewable terms of eight years. Members of the executive board of the ECB are appointed \"from among persons of recognised standing and professional experience in monetary or banking matters by common accord of the governments of the Member States at the level of Heads of State or Government, on a recommendation from the Council, after it has consulted the European Parliament and the Governing Council of the ECB\".", "title": "Organization" }, { "paragraph_id": 58, "text": "José Manuel González-Páramo, a Spanish member of the executive board since June 2004, was due to leave the board in early June 2012, but no replacement had been named as of late May. The Spanish had nominated Barcelona-born Antonio Sáinz de Vicuña – an ECB veteran who heads its legal department – as González-Páramo's replacement as early as January 2012, but alternatives from Luxembourg, Finland, and Slovenia were put forward and no decision made by May. After a long political battle and delays due to the European Parliament's protest over the lack of gender balance at the ECB, Luxembourg's Yves Mersch was appointed as González-Páramo's replacement.", "title": "Organization" }, { "paragraph_id": 59, "text": "In December 2020, Frank Elderson succeeded to Yves Mersch at the ECB's board.", "title": "Organization" }, { "paragraph_id": 60, "text": "The Governing Council is the main decision-making body of the Eurosystem. It comprises the members of the executive board (six in total) and the governors of the National Central Banks of the euro area countries (20 as of 2023).", "title": "Organization" }, { "paragraph_id": 61, "text": "According to Article 284 of the TFEU, the President of the European Council and a representative from the European Commission may attend the meetings as observers, but they lack voting rights.", "title": "Organization" }, { "paragraph_id": 62, "text": "Since January 2015, the ECB has published on its website a summary of the Governing Council deliberations (\"accounts\"). These publications came as a partial response to recurring criticism against the ECB's opacity. However, in contrast to other central banks, the ECB still does not disclose individual voting records of the governors seating in its council.", "title": "Organization" }, { "paragraph_id": 63, "text": "The General Council is a body dealing with transitional issues of euro adoption, for example, fixing the exchange rates of currencies being replaced by the euro (continuing the tasks of the former EMI). It will continue to exist until all EU member states adopt the euro, at which point it will be dissolved. It is composed of the President and vice-president together with the governors of all of the EU's national central banks.", "title": "Organization" }, { "paragraph_id": 64, "text": "The ECB Supervisory Board meets twice a month to discuss, plan and carry out the ECB's supervisory tasks. It proposes draft decisions to the Governing Council under the non-objection procedure. 
It is composed of the Chair (appointed for a non-renewable term of five years), the Vice-chair (chosen from among the members of the ECB's executive board), four ECB representatives and representatives of national supervisors. If the national supervisory authority designated by a Member State is not a national central bank (NCB), the representative of the competent authority can be accompanied by a representative from their NCB. In such cases, the representatives are together considered as one member for the purposes of the voting procedure.", "title": "Organization" }, { "paragraph_id": 65, "text": "It also includes the Steering Committee, which supports the activities of the supervisory board and prepares the Board's meetings. It is composed of the chair of the supervisory board, the Vice-chair of the supervisory board, one ECB representative and five representatives of national supervisors. The five representatives of national supervisors are appointed by the supervisory board for one year based on a rotation system that ensures a fair representation of countries.", "title": "Organization" }, { "paragraph_id": 66, "text": "The ECB is governed by European law directly, but its set-up resembles that of a corporation in the sense that the ECB has shareholders and stock capital. Its initial capital was supposed to be €5 billion and the initial capital allocation key was determined in 1998 on the basis of the member states' populations and GDP, but the key is adjustable. The euro area NCBs were required to pay their respective subscriptions to the ECB's capital in full. The NCBs of the non-participating countries have had to pay 7% of their respective subscriptions to the ECB's capital as a contribution to the operational costs of the ECB. As a result, the ECB was endowed with an initial capital of just under €4 billion. The capital is held by the national central banks of the member states as shareholders. Shares in the ECB are not transferable and cannot be used as collateral. The NCBs are the sole subscribers to and holders of the capital of the ECB.", "title": "Organization" }, { "paragraph_id": 67, "text": "Today, ECB capital is about €11 billion, which is held by the national central banks of the member states as shareholders. The NCBs' shares in this capital are calculated using a capital key which reflects the respective member's share in the total population and gross domestic product of the EU. The ECB adjusts the shares every five years and whenever the number of contributing NCBs changes. The adjustment is made on the basis of data provided by the European Commission.", "title": "Organization" }, { "paragraph_id": 68, "text": "All national central banks (NCBs) that own a share of the ECB capital stock as of 1 February 2020 are listed below. Non-Euro area NCBs are required to pay up only a very small percentage of their subscribed capital, which accounts for the different magnitudes of Euro area and Non-Euro area total paid-up capital.", "title": "Organization" }, { "paragraph_id": 69, "text": "In addition to capital subscriptions, the NCBs of the member states participating in the euro area provided the ECB with foreign reserve assets equivalent to around €40 billion. The contribution of each NCB is in proportion to its share in the ECB's subscribed capital, while in return each NCB is credited by the ECB with a claim in euro equivalent to its contribution. 
15% of the contributions were made in gold, and the remaining 85% in US dollars and UK pounds sterling.", "title": "Organization" }, { "paragraph_id": 70, "text": "The internal working language of the ECB is English, and press conferences are held in English. External communications are handled flexibly: English is preferred (though not exclusively) for communication within the ESCB (i.e. with other central banks) and with financial markets; communication with other national bodies and with EU citizens is normally in their respective language, but the ECB website is predominantly English; official documents such as the Annual Report are in the official languages of the EU (generally English, German and French).", "title": "Organization" }, { "paragraph_id": 71, "text": "In 2022, the ECB published for the first time details on the nationality of its staff, revealing an over-representation of Germans and Italians among ECB employees, including in management positions.", "title": "Organization" }, { "paragraph_id": 72, "text": "The European Central Bank (and by extension, the Eurosystem) is often considered as the \"most independent central bank in the world\". In general terms, this means that the Eurosystem tasks and policies can be discussed, designed, decided and implemented in full autonomy, without pressure or need for instructions from any external body. The main justification for the ECB's independence is that such an institutional setup assists the maintenance of price stability.", "title": "Organization" }, { "paragraph_id": 73, "text": "In practice, the ECB's independence is underpinned by four key principles:", "title": "Organization" }, { "paragraph_id": 74, "text": "In return for its high degree of independence and discretion, the ECB is accountable to the European Parliament (and to a lesser extent to the European Court of Auditors, the European Ombudsman and the Court of Justice of the EU (CJEU)). Although the accountability mechanisms are not enshrined in EU law, several practices were established following a resolution of the European Parliament adopted in 1998, which were informally agreed by the ECB, and incorporated into the Parliament's rules of procedure. In 2023, the European Parliament and the ECB made these accountability arrangements more formal by signing an exchange of letters.", "title": "Organization" }, { "paragraph_id": 75, "text": "The accountability framework involves five main mechanisms:", "title": "Organization" }, { "paragraph_id": 76, "text": "In 2013, an interinstitutional agreement was reached between the ECB and the European Parliament in the context of the establishment of the ECB's Banking Supervision. This agreement grants broader powers to the European Parliament than the established practice on the monetary policy side of the ECB's activities. For example, under the agreement, the Parliament can veto the appointment of the chair and vice-chair of the ECB's supervisory board and may approve removals if requested by the ECB.", "title": "Organization" }, { "paragraph_id": 77, "text": "In addition to its independence, the ECB is subject to limited transparency obligations, in contrast to EU institutional standards and other major central banks. Indeed, as pointed out by Transparency International, \"The Treaties establish transparency and openness as principles of the EU and its institutions. They do, however, grant the ECB a partial exemption from these principles. According to Art. 
15(3) TFEU, the ECB is bound by the EU's transparency principles \"only when exercising [its] administrative tasks\" (the exemption – which leaves the term \"administrative tasks\" undefined – equally applies to the Court of Justice of the European Union and to the European Investment Bank).\"", "title": "Organization" }, { "paragraph_id": 78, "text": "In practice, there are several concrete examples where the ECB is less transparent than other institutions:", "title": "Organization" }, { "paragraph_id": 79, "text": "The bank is based in Ostend (East End), Frankfurt am Main. The city is the largest financial centre in the Eurozone and the bank's location in it is fixed by the Amsterdam Treaty. The bank moved to a new purpose-built headquarters in 2014, designed by a Vienna-based architectural office, Coop Himmelbau. The building is approximately 180 metres (591 ft) tall and is to be accompanied by other secondary buildings on a landscaped site on the site of the former wholesale market in the eastern part of Frankfurt am Main. The main construction on a 120,000 m² total site area began in October 2008, and it was expected that the building would become an architectural symbol for Europe. While it was designed to accommodate double the number of staff who operated in the former Eurotower, that building has been retained by the ECB, owing to more space being required since it took responsibility for banking supervision.", "title": "Location" }, { "paragraph_id": 80, "text": "The debate on the independence of the ECB finds its origins in the preparatory stages of the construction of the EMU. The German government agreed to go ahead if certain crucial guarantees were respected, such as a European Central Bank independent of national governments and shielded from political pressure along the lines of the German central bank. The French government, for its part, feared that this independence would mean that politicians would no longer have any room for manoeuvre in the process. A compromise was then reached by establishing a regular dialogue between the ECB and the Council of Finance Ministers of the euro area, the Eurogroup.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 81, "text": "There is strong consensus among economists on the value of central bank independence from politics. The rationale behind it is both empirical and theoretical. On the theoretical side, it is believed that time inconsistency suggests the existence of political business cycles, in which elected officials might take advantage of policy surprises to secure reelection. Politicians in the run-up to an election will therefore be incentivized to introduce expansionary monetary policies, reducing unemployment in the short run. These effects are most likely temporary. By contrast, in the long run they will increase inflation, with unemployment returning to the natural rate and negating the positive effect. Furthermore, the credibility of the central bank will deteriorate, making it more difficult for the bank to respond to the market. Additionally, empirical work has been done that defined and measured central bank independence (CBI), looking at the relationship of CBI with inflation.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 82, "text": "Demystifying the independence of central bankers: According to Christopher Adolph (2009), the alleged neutrality of central bankers is only a legal façade and not an indisputable fact. 
To achieve this, the author analyses the professional careers of central bankers and compares them with their respective monetary decision-making. To explain the results of his analysis, he uses the \"principal-agent\" theory: in order to create a new entity, one needs a delegator or principal (in this case the heads of state or government of the euro area) and a delegate or agent (in this case the ECB). In his illustration, he describes the financial community as a \"shadow principal\" which influences the choice of central bankers, thus indicating that central banks indeed act as interfaces between the financial world and the States. It is therefore not surprising, still according to the author, to find the financial community's influence and preferences reflected in the appointment of central bankers, who are presumed conservative, neutral and impartial according to the model of the Independent Central Bank (ICB), which eliminates this famous \"temporal inconsistency\". Central bankers had a professional life before joining the central bank and their careers will most likely continue after their tenure. They are ultimately human beings. Therefore, for the author, central bankers have interests of their own, based on their past careers and their expectations after joining the ECB, and try to send messages to their future potential employers.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 83, "text": "The crisis: an opportunity to impose its will and extend its powers:", "title": "Debates surrounding the ECB" }, { "paragraph_id": 84, "text": "– Its participation in the troika: Thanks to the three factors which explain its independence, the ECB took advantage of this crisis to implement, through its participation in the troika, the famous structural reforms in the Member States aimed at making the various markets, particularly the labour market, more flexible, as they are still considered too rigid under the ordoliberal concept.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 85, "text": "– Macro-prudential supervision: At the same time, taking advantage of the reform of the financial supervision system, the Frankfurt Bank has acquired new responsibilities, such as macro-prudential supervision, in other words, supervision of the provision of financial services.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 86, "text": "– Taking liberties with its mandate to save the euro: Paradoxically, the crisis undermined the ECB's ordoliberal discourse \"because some of its instruments, which it had to implement, deviated significantly from its principles\". It then interpreted the paradigm with enough flexibility to adapt its original reputation to these new economic conditions. It was forced to do so as a last resort to save its one and only raison d'être: the euro. 
This independent institution was thus obliged to be pragmatic by departing from the spirit of its statutes, which is unacceptable to the hardest supporters of ordoliberalism, and which led to the resignation of the two German leaders present within the ECB: the governor of the Bundesbank, Axel Weber, and the member of the executive board of the ECB, Jürgen Stark.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 87, "text": "– Regulation of the financial system: The delegation of this new function to the ECB was carried out with great simplicity and with the consent of European leaders, because neither the Commission nor the Member States really wanted to take on the monitoring of financial abuses throughout the area. In other words, in the event of a new financial crisis, the ECB would be the perfect scapegoat.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 88, "text": "– Capturing exchange rate policy: The event that most marked the definitive politicization of the ECB is, of course, the operation launched in January 2015: the quantitative easing (QE) operation. At the time, the euro was overvalued on the world markets against the dollar and the euro zone was at risk of deflation. In addition, Member States found themselves heavily indebted, partly due to the rescue of their national banks. The ECB, as the guardian of the stability of the euro zone, decided to gradually buy more than EUR 1,100 billion of Member States' public debt. In this way, money is injected back into the economy, the euro depreciates significantly, prices rise, the risk of deflation is removed, and Member States reduce their debts. However, the ECB thus gave itself the right to direct the exchange rate policy of the euro zone without this being granted by the Treaties or approved by European leaders, and without public opinion or the public arena being aware of it.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 89, "text": "In conclusion, for those in favour of a framework for ECB independence, there is a clear concentration of powers. In the light of these facts, it is clear that the ECB is no longer the simple guardian of monetary stability in the euro area, but has become, over the course of the crisis, a \"multi-competent economic player, at ease in this role that no one, especially not the agnostic governments of the euro Member States, seems to have the idea of challenging\". This new political super-actor, having captured many spheres of competence and a very strong influence in the economic field in the broad sense (economy, finance, budget...), can no longer act alone and refuse a counter-power, consubstantial to our liberal democracies. Indeed, the status of independence which the ECB enjoys by essence should not exempt it from a real responsibility regarding the democratic process.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 90, "text": "In the aftermath of the euro area crisis, several proposals for a countervailing power were put forward, to deal with criticisms of a democratic deficit. For the German economist Otmar Issing (2001), the ECB has a democratic responsibility and should be more transparent. According to him, this transparency could bring several advantages, such as the improvement of efficiency and credibility, by giving the public adequate information. 
Others think that the ECB should have a closer relationship with the European Parliament which could play a major role in the evaluation of the democratic responsibility of the ECB. The development of new institutions or the creation of a minister is another solution proposed:", "title": "Debates surrounding the ECB" }, { "paragraph_id": 91, "text": "A minister for the Eurozone ?", "title": "Debates surrounding the ECB" }, { "paragraph_id": 92, "text": "The idea of a eurozone finance minister is regularly raised and supported by certain political figures, including Emmanuel Macron, as well as former German Chancellor Angela Merkel, former President of the ECB Jean-Claude Trichet and former European Commissioner Pierre Moscovici. For the latter, this position would bring \"more democratic legitimacy\" and \"more efficiency\" to European politics. In his view, it is a question of merging the powers of Commissioner for the Economy and Finance with those of the President of the Eurogroup.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 93, "text": "The main task of this minister would be to \"represent a strong political authority protecting the economic and budgetary interests of the euro area as a whole, and not the interests of individual Member States\". According to the Jacques Delors Institute, its competencies could be as follows:", "title": "Debates surrounding the ECB" }, { "paragraph_id": 94, "text": "For Jean-Claude Trichet, this minister could also rely on the Eurogroup working group for the preparation and follow-up of meetings in eurozone format, and on the Economic and Financial Committee for meetings concerning all Member States. He would also have under his authority a General Secretariat of the Treasury of the euro area, whose tasks would be determined by the objectives of the budgetary union currently being set up", "title": "Debates surrounding the ECB" }, { "paragraph_id": 95, "text": "This proposal was nevertheless rejected in 2017 by the Eurogroup, its president, Jeroen Dijsselbloem, spoke of the importance of this institution in relation to the European Commission.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 96, "text": "Towards democratic institutions ?", "title": "Debates surrounding the ECB" }, { "paragraph_id": 97, "text": "The absence of democratic institutions such as a Parliament or a real government is a regular criticism of the ECB in its management of the euro area, and many proposals have been made in this respect, particularly after the economic crisis, which would have shown the need to improve the governance of the euro area. For Moïse Sidiropoulos, a professor in economy: \"The crisis in the euro zone came as no surprise, because the euro remains an unfinished currency, a stateless currency with a fragile political legitimacy\".", "title": "Debates surrounding the ECB" }, { "paragraph_id": 98, "text": "French economist Thomas Piketty wrote on his blog in 2017 that it was essential to equip the eurozone with democratic institutions. An economic government could for example enable it to have a common budget, common taxes and borrowing and investment capacities. 
Such a government would then make the euro area more democratic and transparent by avoiding the opacity of a council such as the Eurogroup.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 99, "text": "Nevertheless, according to him, \"there is no point in talking about a government of the eurozone if we do not say to which democratic body this government will be accountable\"; a real parliament of the eurozone to which a finance minister would be accountable seems to be the real priority for the economist, who also denounces the lack of action in this area.", "title": "Debates surrounding the ECB" }, { "paragraph_id": 100, "text": "The creation of a sub-committee within the current European Parliament was also mentioned, on the model of the Eurogroup, which is currently an informal grouping within the ECOFIN Council. This would require a simple amendment to the rules of procedure and would avoid a competitive situation between two separate parliamentary assemblies. The former President of the European Commission had, moreover, stated on this subject that he had \"no sympathy for the idea of a specific Eurozone Parliament\".", "title": "Debates surrounding the ECB" } ]
The European Central Bank (ECB) is the prime component of the Eurosystem and the European System of Central Banks (ESCB) as well as one of seven institutions of the European Union. It is one of the world's most important central banks. The ECB Governing Council makes monetary policy for the Eurozone and the European Union, administers the foreign exchange reserves of EU member states, engages in foreign exchange operations, and defines the intermediate monetary objectives and key interest rate of the EU. The ECB Executive Board enforces the policies and decisions of the Governing Council, and may direct the national central banks when doing so. The ECB has the exclusive right to authorise the issuance of euro banknotes. Member states can issue euro coins, but the volume must be approved by the ECB beforehand. The bank also operates the TARGET2 payments system. The ECB was established by the Treaty of Amsterdam in May 1999 with the purpose of guaranteeing and maintaining price stability. On 1 December 2009, the Treaty of Lisbon became effective and the bank gained the official status of an EU institution. When the ECB was created, it covered a Eurozone of eleven members. Since then, Greece joined in January 2001, Slovenia in January 2007, Cyprus and Malta in January 2008, Slovakia in January 2009, Estonia in January 2011, Latvia in January 2014, Lithuania in January 2015 and Croatia in January 2023. The current President of the ECB is Christine Lagarde. Seated in Frankfurt, Germany, the bank formerly occupied the Eurotower prior to the construction of its new seat. The ECB is directly governed by European Union law. Its capital stock, worth €11 billion, is owned by all 27 central banks of the EU member states as shareholders. The initial capital allocation key was determined in 1998 on the basis of the states' population and GDP, but the capital key has been readjusted since. Shares in the ECB are not transferable and cannot be used as collateral.
2001-07-27T10:44:08Z
2023-12-31T07:42:50Z
[ "Template:Infobox central bank", "Template:Expand section", "Template:Main", "Template:Nowrap", "Template:Flagicon", "Template:Reflist", "Template:Cbignore", "Template:ISBN", "Template:Use British English", "Template:As of", "Template:One source", "Template:Efn", "Template:Politics of the European Union", "Template:Authority control", "Template:See also", "Template:Navboxes", "Template:Convert", "Template:Cite news", "Template:Cite press release", "Template:Portal bar", "Template:Legend", "Template:Legend-line", "Template:Citation needed", "Template:Interlanguage link", "Template:Cite journal", "Template:Official website", "Template:Portal", "Template:Notelist", "Template:Registration required", "Template:Commons category", "Template:Wikisource", "Template:Essay-like", "Template:Cite web", "Template:Cite book", "Template:Short description", "Template:Distinguish", "Template:Use dmy dates", "Template:Further", "Template:Citation" ]
https://en.wikipedia.org/wiki/European_Central_Bank
9,476
Electron
The electron (e or β) is a subatomic particle with a negative one elementary electric charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: They can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. 
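A brief sketch of the relation behind the claim above that a lower mass implies a longer de Broglie wavelength at a given energy: for a non-relativistic particle of mass m and kinetic energy E,

\[ \lambda \;=\; \frac{h}{p} \;=\; \frac{h}{\sqrt{2mE}} , \]

so at fixed kinetic energy the wavelength scales as 1/\sqrt{m}. With the electron's mass roughly 1/1836 of the proton's, an electron's de Broglie wavelength at the same kinetic energy is about \sqrt{1836} \approx 43 times longer than a proton's.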
When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the Neo-Latin term electrica, to refer to those substances with property similar to that of amber which attract small objects after being rubbed. Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron). In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repulsed by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity". Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron. The word electron is a combination of the words electric and ion. The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron. 
While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed the radiation emitted from the cathode caused phosphorescent light to appear on the tube wall near the cathode; and the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays. Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons. During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode-ray tube to have a high vacuum inside. He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter in which the mean free path of the particles is so long that collisions may be ignored. The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given electric and magnetic field, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. This is because it was assumed that the charge carriers were much heavier hydrogen or nitrogen atoms. Schuster's estimates would subsequently turn out to be largely correct. In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge. While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms. In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. 
Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles", had perhaps one thousandth of the mass of the least massive ion known: hydrogen. He showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. The name electron was adopted for these particles by the scientific community, mainly due to the advocacy of G. F. FitzGerald, J. Larmor, and H. A. Lorentz. In the same year Emil Wiechert and Walter Kaufmann also calculated the e/m ratio, but they fell short of interpreting their results, while J. J. Thomson would subsequently in 1899 give estimates for the electron charge and mass as well: e ~ 6.8×10⁻¹⁰ esu and m ~ 3×10⁻²⁶ g. The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time. Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons. By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms. Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.
In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells, each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law. In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting. In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits, thereby creating interference patterns. In 1927, George Paget Thomson and Alexander Reid discovered that the interference effect was produced when a beam of electrons was passed through thin celluloid foils and later metal films, and by American physicists Clinton Davisson and Lester Germer by the reflection of electrons from a crystal of nickel. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned.
This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen. In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons and using electron as a generic term to describe both the positively and negatively charged variants. In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s. With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light. With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics. Individual electrons can now be easily confined in ultra-small (L = 20 nm, W = 20 nm) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K).
The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor. In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2. The invariant mass of an electron is approximately 9.109×10⁻³¹ kilograms, or 5.489×10⁻⁴ atomic mass units. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV (8.19×10⁻¹⁴ J). The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe. Electrons have an electric charge of −1.602176634×10⁻¹⁹ coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by e⁻, and the positron is symbolized by e⁺. The electron has an intrinsic angular momentum or spin of ħ/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant equal to 9.27400915(23)×10⁻²⁴ joules per tesla. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles. The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters.
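As an illustrative numerical check, not drawn from the original text, the rest energy and the Bohr magneton quoted above can be reproduced from the standard SI constants with a short Python sketch; the constant values used here are the usual CODATA figures, rounded.

```python
# Illustrative sketch: reproduce the electron's rest energy and the Bohr magneton
# from standard SI constants (usual CODATA values, rounded).
m_e   = 9.1093837015e-31   # electron mass, kg
c     = 2.99792458e8       # speed of light, m/s
e     = 1.602176634e-19    # elementary charge, C
h_bar = 1.054571817e-34    # reduced Planck constant, J*s

rest_energy_J   = m_e * c**2                # E = m c^2
rest_energy_MeV = rest_energy_J / e / 1e6   # joules -> electronvolts -> MeV
bohr_magneton   = e * h_bar / (2 * m_e)     # mu_B = e*hbar/(2*m_e), in J/T

print(f"rest energy   ~ {rest_energy_J:.3e} J = {rest_energy_MeV:.3f} MeV")  # ~8.19e-14 J, ~0.511 MeV
print(f"Bohr magneton ~ {bohr_magneton:.4e} J/T")                            # ~9.274e-24 J/T
```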
The upper bound of the electron radius of 10⁻¹⁸ meters can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron. There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6×10²⁸ years, at a 90% confidence level. As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead. In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit. In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ.
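The "classical electron radius" mentioned above follows from equating the electrostatic self-energy of the charge with the electron's rest energy. As a hedged illustration, not part of the original article, the arithmetic can be sketched as follows using standard SI constants.

```python
# Classical electron radius r_e = e^2 / (4*pi*eps0 * m_e * c^2),
# i.e. the length scale at which the Coulomb self-energy equals the rest energy.
import math

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius ~ {r_e:.4e} m")   # ~2.818e-15 m, as quoted in the text
```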
In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10⁻¹⁶ eV·s. Thus, for a virtual electron, Δt is at most 1.3×10⁻²¹ s. While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron. The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics. The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance. An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère–Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic). When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.
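A minimal sketch, not from the original text, makes the Δt bound quoted above concrete: taking the borrowed energy ΔE to be one electron rest energy, the uncertainty relation gives a lifetime of roughly 10⁻²¹ s for the virtual electron.

```python
# Rough bound on the lifetime of a virtual electron from dE * dt ~ hbar,
# with dE taken as the electron rest energy m_e c^2 (an assumption matching
# the figure quoted in the article).
h_bar = 1.054571817e-34   # reduced Planck constant, J*s
m_e   = 9.1093837015e-31  # electron mass, kg
c     = 2.99792458e8      # speed of light, m/s

dE = m_e * c**2           # energy "borrowed" to create the virtual electron, J
dt = h_bar / dE           # maximum time the pair can exist
print(f"dt ~ {dt:.2e} s") # ~1.3e-21 s
```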
The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself. Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/(mec), which is known as the Compton wavelength. For an electron, it has a value of 2.43×10⁻¹² m. When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering. The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10⁻³, which is approximately equal to 1/137. When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus. In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z boson exchange, and this is responsible for neutrino-electron elastic scattering. An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital.
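The Compton wavelength and the fine-structure constant quoted above both follow directly from the fundamental constants. The short Python sketch below, which is an editorial illustration rather than part of the source, reproduces the two figures from standard SI values.

```python
# Compton wavelength lambda_C = h / (m_e * c) and fine-structure constant
# alpha = e^2 / (4*pi*eps0*hbar*c), computed from standard SI constants.
import math

h     = 6.62607015e-34     # Planck constant, J*s
h_bar = 1.054571817e-34    # reduced Planck constant, J*s
m_e   = 9.1093837015e-31   # electron mass, kg
c     = 2.99792458e8       # speed of light, m/s
e     = 1.602176634e-19    # elementary charge, C
eps0  = 8.8541878128e-12   # vacuum permittivity, F/m

lambda_C = h / (m_e * c)
alpha    = e**2 / (4 * math.pi * eps0 * h_bar * c)

print(f"Compton wavelength ~ {lambda_C:.3e} m")            # ~2.43e-12 m
print(f"alpha ~ {alpha:.6e} (1/alpha ~ {1/alpha:.2f})")    # ~7.297e-3, ~137.04
```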
Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exists around the nucleus. According to the Pauli exclusion principle, each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron. The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so-called paired electrons) cancel each other out. The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei. If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect. Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality, the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field.
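To make the matching of photon energy to an orbital energy difference concrete, here is a small illustrative calculation, not taken from the original text, for the hydrogen n = 2 → n = 1 (Lyman-alpha) transition using the familiar Rydberg energy of about 13.6 eV.

```python
# Photon energy and wavelength for the hydrogen Lyman-alpha transition (n=2 -> n=1),
# using the Rydberg energy ~13.6 eV as the scale of hydrogen level spacings.
Ry = 13.605693          # Rydberg energy, eV
h  = 4.135667696e-15    # Planck constant, eV*s
c  = 2.99792458e8       # speed of light, m/s

dE  = Ry * (1/1**2 - 1/2**2)   # energy difference between the two levels, ~10.2 eV
lam = h * c / dE               # wavelength of the emitted/absorbed photon

print(f"dE ~ {dE:.2f} eV, wavelength ~ {lam*1e9:.1f} nm")   # ~10.2 eV, ~121.6 nm
```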
These interactions are described mathematically by Maxwell's equations. At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material, much like free electrons. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material. Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current. When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher-temperature superconductors operate remains uncertain. Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The spinon carries the spin and magnetic moment, the orbiton carries the orbital location, and the holon carries the electrical charge. According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, c.
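The proportionality in the Wiedemann–Franz law can be written as κ/σ = L·T, where L is the Lorenz number predicted by the free-electron model. The following sketch, an editorial illustration under that assumption rather than material from the source, computes L from the Boltzmann constant and the elementary charge.

```python
# Wiedemann–Franz law: kappa/sigma = L * T, with the free-electron Lorenz number
# L = (pi^2 / 3) * (k_B / e)^2.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
e   = 1.602176634e-19     # elementary charge, C

L = (math.pi**2 / 3) * (k_B / e)**2
print(f"Lorenz number L ~ {L:.3e} W*Ohm/K^2")          # ~2.44e-8
print(f"kappa/sigma at 300 K ~ {L * 300:.3e} W*Ohm/K") # the ratio grows linearly with T
```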
However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation. The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is Ke = (γ − 1)mec², where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV. Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p, where h is the Planck constant and p is the momentum. For the 51 GeV electron above, the wavelength is about 2.4×10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus. The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons: e⁻ + e⁺ ↔ γ + γ. An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe. For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process: n → p + e⁻ + ν̄e. For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation. Roughly one million years after the Big Bang, the first generation of stars began to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes.
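As a hedged illustration of these relativistic relations, not part of the original article, the sketch below applies them to the 51 GeV example: it estimates the Lorentz factor and the de Broglie wavelength, treating the quoted 51 GeV as the electron's total energy (an assumption that is harmless here because the rest energy is negligible at this scale).

```python
# Relativistic kinematics for an electron of total energy ~51 GeV (the SLAC example):
# Lorentz factor gamma = E / (m_e c^2), momentum from E^2 = (pc)^2 + (m_e c^2)^2,
# and de Broglie wavelength lambda = h / p.
m_e = 9.1093837015e-31    # electron mass, kg
c   = 2.99792458e8        # speed of light, m/s
h   = 6.62607015e-34      # Planck constant, J*s
e   = 1.602176634e-19     # elementary charge, C (for eV <-> J conversion)

E_total = 51e9 * e                                    # 51 GeV in joules
gamma   = E_total / (m_e * c**2)                      # ~1e5
p       = (E_total**2 - (m_e * c**2)**2) ** 0.5 / c   # relativistic momentum, kg*m/s
lam     = h / p                                       # de Broglie wavelength

print(f"gamma ~ {gamma:.3e}")
print(f"de Broglie wavelength ~ {lam:.2e} m")         # ~2.4e-17 m, as quoted in the text
```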
Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (⁶⁰Co) isotope, which decays to form nickel-60 (⁶⁰Ni). At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants. When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes. Cosmic rays are particles traveling through space with high energies. Energy events as high as 3.0×10²⁰ eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion. A muon, in turn, can decay to form an electron or positron. Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes. The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. When detected, spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined. In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties.
For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant. The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden in February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time. The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material. Electron beams are used in welding. They allow energy densities up to 10⁷ W·cm⁻² across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding. Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits. Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products. Electron beams fluidise or quasi-melt glasses without a significant increase of temperature on intensive irradiation: e.g. intensive electron irradiation causes a decrease of viscosity by many orders of magnitude and a stepwise decrease of its activation energy. Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays. Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Electron and positron beams are collided once the particles have been accelerated to the required energies; particle detectors observe the resulting energy emissions, which are studied in particle physics. Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material.
The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°. The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy, as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high-resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain. Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material and then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface. In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. The FEL can emit coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery. Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.
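The 0.0037 nm figure quoted for 100 kV electrons can be checked with the relativistic de Broglie relation. The sketch below is an illustrative calculation, not part of the source text; it includes the relativistic correction because at 100 keV the kinetic energy is no longer small compared with the electron's rest energy.

```python
# De Broglie wavelength of electrons accelerated through 100 kV, with the
# relativistic momentum p = sqrt(2*m_e*K*(1 + K/(2*m_e*c^2))).
m_e = 9.1093837015e-31   # electron mass, kg
c   = 2.99792458e8       # speed of light, m/s
h   = 6.62607015e-34     # Planck constant, J*s
e   = 1.602176634e-19    # elementary charge, C

V = 100e3                                                # accelerating potential, volts
K = e * V                                                # kinetic energy gained, J
p = (2 * m_e * K * (1 + K / (2 * m_e * c**2))) ** 0.5    # relativistic momentum
print(f"wavelength ~ {h / p * 1e9:.4f} nm")              # ~0.0037 nm, as quoted in the text
```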
This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.", "title": "History" }, { "paragraph_id": 23, "text": "With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.", "title": "History" }, { "paragraph_id": 24, "text": "Individual electrons can now be easily confined in ultra small (L = 20 nm, W = 20 nm) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor.", "title": "History" }, { "paragraph_id": 25, "text": "In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2.", "title": "Characteristics" }, { "paragraph_id": 26, "text": "The invariant mass of an electron is approximately 9.109×10⁻³¹ kilograms, or 5.489×10⁻⁴ atomic mass units. Due to mass–energy equivalence, this corresponds to a rest energy of 0.511 MeV (8.19×10⁻¹⁴ J). The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.", "title": "Characteristics" }, { "paragraph_id": 27, "text": "Electrons have an electric charge of −1.602176634×10⁻¹⁹ coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by e⁻, and the positron is symbolized by e⁺.", "title": "Characteristics" }, { "paragraph_id": 28, "text": "The electron has an intrinsic angular momentum or spin of ħ/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant equal to 9.27400915(23)×10⁻²⁴ joules per tesla.
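As a quick numerical cross-check of the figures quoted above, the minimal Python sketch below recomputes the electron's rest energy, the proton-to-electron mass ratio, and the Bohr magneton. The specific constant values used are standard CODATA-style numbers supplied here for illustration; they are assumptions of this sketch, not part of the original article.

```python
# Assumed CODATA-style constants (illustrative values, not from the article itself)
m_e  = 9.1093837015e-31    # electron mass, kg
m_p  = 1.67262192369e-27   # proton mass, kg
e    = 1.602176634e-19     # elementary charge, C
c    = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J*s

# Rest energy E = m*c^2, expressed in joules and MeV
E_rest_J = m_e * c**2
E_rest_MeV = E_rest_J / e / 1e6
print(f"rest energy: {E_rest_J:.3e} J = {E_rest_MeV:.3f} MeV")   # ~8.19e-14 J, ~0.511 MeV

# Proton-to-electron mass ratio (~1836)
print(f"m_p / m_e = {m_p / m_e:.1f}")

# Bohr magneton mu_B = e*hbar / (2*m_e), in joules per tesla (~9.274e-24 J/T)
mu_B = e * hbar / (2 * m_e)
print(f"Bohr magneton: {mu_B:.4e} J/T")
```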
The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.", "title": "Characteristics" }, { "paragraph_id": 29, "text": "The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.", "title": "Characteristics" }, { "paragraph_id": 30, "text": "The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters. An upper bound of the electron radius of 10⁻¹⁸ meters can be derived using the uncertainty relation in energy. There is also a physical constant called the \"classical electron radius\", with the much larger value of 2.8179×10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.", "title": "Characteristics" }, { "paragraph_id": 31, "text": "There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6×10²⁸ years, at a 90% confidence level.", "title": "Characteristics" }, { "paragraph_id": 32, "text": "As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment.", "title": "Characteristics" }, { "paragraph_id": 33, "text": "The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.", "title": "Characteristics" }, { "paragraph_id": 34, "text": "Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system.
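The "classical electron radius" mentioned in the radius discussion above comes from equating the electrostatic self-energy of a charged sphere with the electron's rest energy. A minimal sketch of that calculation, using assumed standard constant values, is shown below.

```python
import math

# Assumed standard constants (illustrative, not taken from the article)
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Classical electron radius: r_e = e^2 / (4*pi*eps0 * m_e * c^2),
# the radius at which the electrostatic self-energy equals the rest energy.
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius: {r_e:.4e} m")   # ~2.818e-15 m
```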
The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.", "title": "Characteristics" }, { "paragraph_id": 35, "text": "In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.", "title": "Characteristics" }, { "paragraph_id": 36, "text": "In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be \"borrowed\" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10 s.", "title": "Characteristics" }, { "paragraph_id": 37, "text": "While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron.", "title": "Characteristics" }, { "paragraph_id": 38, "text": "The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.", "title": "Characteristics" }, { "paragraph_id": 39, "text": "The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. 
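The lifetime quoted earlier for a virtual electron follows directly from the uncertainty relation. The sketch below reproduces that order-of-magnitude estimate by taking the borrowed energy ΔE to be one electron rest energy; that choice, and the constant values used, are assumptions made here for illustration.

```python
# Rough uncertainty-relation estimate of how long a virtual electron can exist.
# Assumption (for illustration): the borrowed energy dE is one electron rest energy.
hbar_eVs  = 6.582119569e-16   # reduced Planck constant, eV*s
m_e_c2_eV = 0.51099895e6      # electron rest energy, eV

dE = m_e_c2_eV                # energy "borrowed" from the vacuum, eV
dt = hbar_eVs / dE            # lifetime bound from dE * dt ~ hbar
print(f"dt ~ {dt:.2e} s")     # ~1.3e-21 s
```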
This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton Wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the \"static\" of virtual particles around elementary particles at a close distance.", "title": "Characteristics" }, { "paragraph_id": 40, "text": "An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère–Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).", "title": "Characteristics" }, { "paragraph_id": 41, "text": "When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac Force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.", "title": "Characteristics" }, { "paragraph_id": 42, "text": "Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation.", "title": "Characteristics" }, { "paragraph_id": 43, "text": "An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength. For an electron, it has a value of 2.43×10 m. When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. 
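To make the gyroradius mentioned above concrete, here is a minimal non-relativistic sketch for an electron circling in a uniform magnetic field. The chosen speed and field strength are arbitrary illustrative values, not figures from the article.

```python
# Non-relativistic gyroradius r = m_e * v_perp / (e * B) for an electron
# moving in a uniform magnetic field. Numbers below are illustrative only.
m_e = 9.1093837015e-31   # electron mass, kg
e   = 1.602176634e-19    # elementary charge, C

v_perp = 1.0e6           # speed perpendicular to the field, m/s (assumed)
B      = 1.0e-3          # magnetic flux density, T (assumed)

r = m_e * v_perp / (e * B)
print(f"gyroradius: {r*1e3:.2f} mm")   # ~5.7 mm for these values
```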
Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.", "title": "Characteristics" }, { "paragraph_id": 44, "text": "The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10, which is approximately equal to 1/137.", "title": "Characteristics" }, { "paragraph_id": 45, "text": "When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.", "title": "Characteristics" }, { "paragraph_id": 46, "text": "In the theory of electroweak interaction, the left-handed component of electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z exchange, and this is responsible for neutrino-electron elastic scattering.", "title": "Characteristics" }, { "paragraph_id": 47, "text": "An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.", "title": "Characteristics" }, { "paragraph_id": 48, "text": "Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.", "title": "Characteristics" }, { "paragraph_id": 49, "text": "The orbital angular momentum of electrons is quantized. 
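The Compton shift and the fine-structure constant discussed above are both straightforward to reproduce numerically. The sketch below uses assumed standard constant values and an arbitrary 90-degree scattering angle purely for illustration.

```python
import math

# Assumed standard constants (illustrative values)
h    = 6.62607015e-34    # Planck constant, J*s
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Compton wavelength h/(m_e c) and the shift for a 90-degree scattering angle
lambda_C = h / (m_e * c)                       # ~2.43e-12 m
d_lambda = lambda_C * (1 - math.cos(math.radians(90)))
print(f"Compton wavelength: {lambda_C:.3e} m, shift at 90 deg: {d_lambda:.3e} m")

# Fine-structure constant alpha = e^2 / (4*pi*eps0*hbar*c), roughly 1/137
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6e}  (1/alpha = {1/alpha:.2f})")
```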
Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so called, paired electrons) cancel each other out.", "title": "Characteristics" }, { "paragraph_id": 50, "text": "The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.", "title": "Characteristics" }, { "paragraph_id": 51, "text": "If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.", "title": "Characteristics" }, { "paragraph_id": 52, "text": "Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.", "title": "Characteristics" }, { "paragraph_id": 53, "text": "At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. 
On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons.", "title": "Characteristics" }, { "paragraph_id": 54, "text": "Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.", "title": "Characteristics" }, { "paragraph_id": 55, "text": "Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.", "title": "Characteristics" }, { "paragraph_id": 56, "text": "When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain.", "title": "Characteristics" }, { "paragraph_id": 57, "text": "Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The former carries spin and magnetic moment, the next carries its orbital location while the latter electrical charge.", "title": "Characteristics" }, { "paragraph_id": 58, "text": "According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. 
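The Wiedemann–Franz relation mentioned a few paragraphs above can be illustrated with a short calculation: the Lorenz number π²kB²/(3e²), multiplied by the electrical conductivity and the temperature, gives an estimate of a metal's thermal conductivity. The copper conductivity used below is an assumed illustrative value, not a figure from the article.

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
e   = 1.602176634e-19    # elementary charge, C

# Lorenz number L = pi^2 * k_B^2 / (3 * e^2), ~2.44e-8 W*Ohm/K^2
L = math.pi**2 * k_B**2 / (3 * e**2)
print(f"Lorenz number: {L:.3e} W*Ohm/K^2")

# Wiedemann-Franz estimate of thermal conductivity: kappa ~ L * sigma * T
sigma_cu = 5.96e7        # electrical conductivity of copper, S/m (assumed)
T = 300.0                # temperature, K (assumed)
kappa = L * sigma_cu * T
print(f"estimated thermal conductivity of copper: {kappa:.0f} W/(m*K)")  # ~440; measured ~400
```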
As they interact with the medium, they generate a faint light called Cherenkov radiation.", "title": "Characteristics" }, { "paragraph_id": 59, "text": "The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is Ke = (γ − 1)mec²,", "title": "Characteristics" }, { "paragraph_id": 60, "text": "where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV. Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum. For the 51 GeV electron above, the wavelength is about 2.4×10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus.", "title": "Characteristics" }, { "paragraph_id": 61, "text": "The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons: e⁺ + e⁻ ⇌ γ + γ.", "title": "Formation" }, { "paragraph_id": 62, "text": "An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.", "title": "Formation" }, { "paragraph_id": 63, "text": "For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process: n → p + e⁻ + ν̄e.", "title": "Formation" }, { "paragraph_id": 64, "text": "For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.", "title": "Formation" }, { "paragraph_id": 65, "text": "Roughly one million years after the Big Bang, the first generation of stars began to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons.
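As a worked example of the relativistic expressions above, the sketch below evaluates the Lorentz factor, kinetic energy and de Broglie wavelength for an electron at roughly the 51 GeV energy quoted for the Stanford linear accelerator; the constant values are assumed standard numbers used only for illustration.

```python
import math

# Assumed standard constants
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
m_e_c2_eV = 0.511e6      # electron rest energy, eV

E_eV = 51e9              # total energy of the electron, eV (Stanford example)

# Lorentz factor gamma = E / (m_e c^2); kinetic energy K = (gamma - 1) m_e c^2
gamma = E_eV / m_e_c2_eV
K_eV = (gamma - 1) * m_e_c2_eV
print(f"gamma ~ {gamma:.3e}, kinetic energy ~ {K_eV/1e9:.1f} GeV")

# Relativistic de Broglie wavelength lambda = h/p, with p from E^2 = (pc)^2 + (m c^2)^2
p = math.sqrt(E_eV**2 - m_e_c2_eV**2) * e / c   # momentum in SI units, kg*m/s
lam = h / p
print(f"de Broglie wavelength ~ {lam:.2e} m")   # ~2.4e-17 m
```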
However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (Co) isotope, which decays to form nickel-60 (Ni).", "title": "Formation" }, { "paragraph_id": 66, "text": "At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.", "title": "Formation" }, { "paragraph_id": 67, "text": "When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.", "title": "Formation" }, { "paragraph_id": 68, "text": "Cosmic rays are particles traveling through space with high energies. Energy events as high as 3.0×10 eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.", "title": "Formation" }, { "paragraph_id": 69, "text": "A muon, in turn, can decay to form an electron or positron.", "title": "Formation" }, { "paragraph_id": 70, "text": "Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which is waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.", "title": "Observation" }, { "paragraph_id": 71, "text": "The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. 
When detected, spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.", "title": "Observation" }, { "paragraph_id": 72, "text": "In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.", "title": "Observation" }, { "paragraph_id": 73, "text": "The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.", "title": "Observation" }, { "paragraph_id": 74, "text": "The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.", "title": "Observation" }, { "paragraph_id": 75, "text": "Electron beams are used in welding. They allow energy densities up to 10 W·cm across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.", "title": "Plasma applications" }, { "paragraph_id": 76, "text": "Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in the vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.", "title": "Plasma applications" }, { "paragraph_id": 77, "text": "Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products. Electron beams fluidise or quasi-melt glasses without significant increase of temperature on intensive irradiation: e.g. intensive electron radiation causes a many orders of magnitude decrease of viscosity and stepwise decrease of its activation energy.", "title": "Plasma applications" }, { "paragraph_id": 78, "text": "Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. 
An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.", "title": "Plasma applications" }, { "paragraph_id": 79, "text": "Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Electron and positron beams are collided upon the particles' accelerating to the required energies; particle detectors observe the resulting energy emissions, which particle physics studies .", "title": "Plasma applications" }, { "paragraph_id": 80, "text": "Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.", "title": "Plasma applications" }, { "paragraph_id": 81, "text": "The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.", "title": "Plasma applications" }, { "paragraph_id": 82, "text": "Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes rasteri a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.", "title": "Plasma applications" }, { "paragraph_id": 83, "text": "In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. 
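The 0.0037 nm wavelength quoted above for electrons accelerated across a 100,000-volt potential can be reproduced with the relativistically corrected de Broglie formula. The sketch below is illustrative only and uses assumed standard constant values.

```python
import math

# Assumed standard constants
h   = 6.62607015e-34     # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
e   = 1.602176634e-19    # elementary charge, C
c   = 2.99792458e8       # speed of light, m/s

V = 100e3                # accelerating potential, volts

# Relativistically corrected de Broglie wavelength for an electron
# accelerated through a potential V:
#   lambda = h / sqrt(2*m_e*e*V * (1 + e*V / (2*m_e*c^2)))
lam = h / math.sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c**2)))
print(f"electron wavelength at {V/1e3:.0f} kV: {lam*1e9:.4f} nm")   # ~0.0037 nm
```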
The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit a coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.", "title": "Plasma applications" }, { "paragraph_id": 84, "text": "Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.", "title": "Plasma applications" }, { "paragraph_id": 85, "text": "", "title": "External links" } ]
The electron is a subatomic particle with a negative one elementary electric charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: They can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. 
When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
2001-09-26T02:01:08Z
2023-11-23T22:25:41Z
[ "Template:Pp-move", "Template:Rp", "Template:Nowrap", "Template:Webarchive", "Template:Quantum field theories", "Template:Quantum gravity", "Template:Other uses", "Template:Val", "Template:Cite conference", "Template:Particles", "Template:Main", "Template:Cite magazine", "Template:Quantum electrodynamics", "Template:See also", "Template:Clear", "Template:Quantum information", "Template:SimpleNuclide", "Template:Cite book", "Template:Cite web", "Template:EB1911 poster", "Template:Standard model of particle physics", "Template:Efn", "Template:Further", "Template:Sfrac", "Template:Commons category", "Template:Cite arXiv", "Template:Wikiquote", "Template:Authority control", "Template:Infobox particle", "Template:Convert", "Template:Portal", "Template:Citation", "Template:Featured article", "Template:SubatomicParticle", "Template:Cmn", "Template:Cite press release", "Template:Radiation oncology", "Template:Reflist", "Template:Cite journal", "Template:Cite news", "Template:Dead link", "Template:Short description", "Template:Mvar", "Template:Lang", "Template:Notelist" ]
https://en.wikipedia.org/wiki/Electron
9,477
Europium
Europium is a chemical element; it has symbol Eu and atomic number 63. Europium is a silvery-white metal of the lanthanide series that reacts readily with air to form a dark oxide coating. It is the most chemically reactive, least dense, and softest of the lanthanide elements. It is soft enough to be cut with a knife. Europium was isolated in 1901 and named after the continent of Europe. Europium usually assumes the oxidation state +3, like other members of the lanthanide series, but compounds having oxidation state +2 are also common. All europium compounds with oxidation state +2 are slightly reducing. Europium has no significant biological role and is relatively non-toxic compared to other heavy metals. Most applications of europium exploit the phosphorescence of europium compounds. Europium is one of the rarest of the rare-earth elements on Earth. Europium is a ductile metal with a hardness similar to that of lead. It crystallizes in a body-centered cubic lattice. Some properties of europium are strongly influenced by its half-filled electron shell. Europium has the second lowest melting point and the lowest density of all lanthanides. Europium has been claimed to become a superconductor when it is cooled below 1.8 K and compressed to above 80 GPa. However the experimental evidence on which this claim is based has been challenged, and the paper reporting superconductivity has been subsequently retracted. If it becomes a superconductor this is believed to occur because europium is divalent in the metallic state, and is converted into the trivalent state by the applied pressure. In the divalent state, the strong local magnetic moment (arising from total electronic angular momentum J = /2) suppresses the superconductivity, which is induced by eliminating this local moment (J = 0 in Eu). Europium is the most reactive rare-earth element. It rapidly oxidizes in air, so that bulk oxidation of a centimeter-sized sample occurs within several days. Its reactivity with water is comparable to that of calcium, and the reaction is Because of the high reactivity, samples of solid europium rarely have the shiny appearance of the fresh metal, even when coated with a protective layer of mineral oil. Europium ignites in air at 150 to 180 °C to form europium(III) oxide: Europium dissolves readily in dilute sulfuric acid to form pale pink solutions of [Eu(H2O)9]: Although usually trivalent, europium readily forms divalent compounds. This behavior is unusual for most lanthanides, which almost exclusively form compounds with an oxidation state of +3. The +2 state has an electron configuration 4f because the half-filled f-shell provides more stability. In terms of size and coordination number, europium(II) and barium(II) are similar. The sulfates of both barium and europium(II) are also highly insoluble in water. Divalent europium is a mild reducing agent, oxidizing in air to form Eu(III) compounds. In anaerobic, and particularly geothermal conditions, the divalent form is sufficiently stable that it tends to be incorporated into minerals of calcium and the other alkaline earths. This ion-exchange process is the basis of the "negative europium anomaly", the low europium content in many lanthanide minerals such as monazite, relative to the chondritic abundance. Bastnäsite tends to show less of a negative europium anomaly than does monazite, and hence is the major source of europium today. 
The development of easy methods to separate divalent europium from the other (trivalent) lanthanides made europium accessible even when present in low concentration, as it usually is. Naturally occurring europium is composed of two isotopes, Eu and Eu, which occur in almost equal proportions; Eu is slightly more abundant (52.2% natural abundance). While Eu is stable, Eu was found to be unstable to alpha decay with a half-life of 5+11−3×10 years in 2007, giving about one alpha decay per two minutes in every kilogram of natural europium. This value is in reasonable agreement with theoretical predictions. Besides the natural radioisotope Eu, 35 artificial radioisotopes have been characterized, the most stable being Eu with a half-life of 36.9 years, Eu with a half-life of 13.516 years, and Eu with a half-life of 8.593 years. All the remaining radioactive isotopes have half-lives shorter than 4.7612 years, and the majority of these have half-lives shorter than 12.2 seconds; the known isotopes of europium range from Eu to Eu. This element also has 17 meta states, with the most stable being Eu (t1/2=12.8 hours), Eu (t1/2=9.3116 hours) and Eu (t1/2=96 minutes). The primary decay mode for isotopes lighter than Eu is electron capture, and the primary mode for heavier isotopes is beta minus decay. The primary decay products before Eu are isotopes of samarium (Sm) and the primary products after are isotopes of gadolinium (Gd). Europium is produced by nuclear fission, but the fission product yields of europium isotopes are low near the top of the mass range for fission products. As with other lanthanides, many isotopes of europium, especially those that have odd mass numbers or are neutron-poor like Eu, have high cross sections for neutron capture, often high enough to be neutron poisons. Eu is the beta decay product of samarium-151, but since this has a long decay half-life and short mean time to neutron absorption, most Sm instead ends up as Sm. Eu (half-life 13.516 years) and Eu (half-life 8.593 years) cannot be beta decay products because Sm and Sm are non-radioactive, but Eu is the only long-lived "shielded" nuclide, other than Cs, to have a fission yield of more than 2.5 parts per million fissions. A larger amount of Eu is produced by neutron activation of a significant portion of the non-radioactive Eu; however, much of this is further converted to Eu. Eu (half-life 4.7612 years) has a fission yield of 330 parts per million (ppm) for uranium-235 and thermal neutrons; most of it is transmuted to non-radioactive and nonabsorptive gadolinium-156 by the end of fuel burnup. Overall, europium is overshadowed by caesium-137 and strontium-90 as a radiation hazard, and by samarium and others as a neutron poison. Europium is not found in nature as a free element. Many minerals contain europium, with the most important sources being bastnäsite, monazite, xenotime and loparite-(Ce). No europium-dominant minerals are known yet, despite a single find of a tiny possible Eu–O or Eu–O–C system phase in the Moon's regolith. Depletion or enrichment of europium in minerals relative to other rare-earth elements is known as the europium anomaly. Europium is commonly included in trace element studies in geochemistry and petrology to understand the processes that form igneous rocks (rocks that cooled from magma or lava). The nature of the europium anomaly found helps reconstruct the relationships within a suite of igneous rocks. The average crustal abundance of europium is 2–2.2 ppm. 
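The statement above that natural europium produces roughly one alpha decay every two minutes per kilogram can be checked from the half-life of its alpha-emitting isotope. In the sketch below, the half-life of about 5×10¹⁸ years and the roughly 48% natural abundance of the lighter isotope are assumed illustrative inputs, not values taken verbatim from the article.

```python
import math

# Alpha activity of 1 kg of natural europium from its alpha-emitting isotope.
# Assumed inputs (illustrative): half-life ~5e18 years, abundance ~47.8%.
N_A = 6.02214076e23          # Avogadro constant, 1/mol
half_life_yr  = 5e18         # alpha-decay half-life, years (assumed)
abundance_151 = 0.478        # natural abundance of the alpha emitter (assumed)
molar_mass_eu = 151.96       # average molar mass of natural Eu, g/mol

atoms_151  = 1000.0 / molar_mass_eu * abundance_151 * N_A   # atoms in 1 kg of natural Eu
decay_const = math.log(2) / (half_life_yr * 3.156e7)        # decay constant, per second

activity = atoms_151 * decay_const                           # decays per second
print(f"~{activity:.3f} decays/s, i.e. about one every {1/activity/60:.1f} minutes")
```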
Divalent europium (Eu) in small amounts is the activator of the bright blue fluorescence of some samples of the mineral fluorite (CaF2). The reduction from Eu to Eu is induced by irradiation with energetic particles. The most outstanding examples of this originated around Weardale and adjacent parts of northern England; it was the fluorite found here that fluorescence was named after in 1852, although it was not until much later that europium was determined to be the cause. In astrophysics, the signature of europium in stellar spectra can be used to classify stars and inform theories of how or where a particular star was born. For instance, astronomers in 2019 identified higher-than-expected levels of europium within the star J1124+4535, hypothesizing that this star originated in a dwarf galaxy that collided with the Milky Way billions of years ago. Europium is associated with the other rare-earth elements and is, therefore, mined together with them. Separation of the rare-earth elements occurs during later processing. Rare-earth elements are found in the minerals bastnäsite, loparite-(Ce), xenotime, and monazite in mineable quantities. Bastnäsite is a group of related fluorocarbonates, Ln(CO3)(F,OH). Monazite is a group of related of orthophosphate minerals LnPO4 (Ln denotes a mixture of all the lanthanides except promethium), loparite-(Ce) is an oxide, and xenotime is an orthophosphate (Y,Yb,Er,...)PO4. Monazite also contains thorium and yttrium, which complicates handling because thorium and its decay products are radioactive. For the extraction from the ore and the isolation of individual lanthanides, several methods have been developed. The choice of method is based on the concentration and composition of the ore and on the distribution of the individual lanthanides in the resulting concentrate. Roasting the ore, followed by acidic and basic leaching, is used mostly to produce a concentrate of lanthanides. If cerium is the dominant lanthanide, then it is converted from cerium(III) to cerium(IV) and then precipitated. Further separation by solvent extractions or ion exchange chromatography yields a fraction which is enriched in europium. This fraction is reduced with zinc, zinc/amalgam, electrolysis or other methods converting the europium(III) to europium(II). Europium(II) reacts in a way similar to that of alkaline earth metals and therefore it can be precipitated as a carbonate or co-precipitated with barium sulfate. Europium metal is available through the electrolysis of a mixture of molten EuCl3 and NaCl (or CaCl2) in a graphite cell, which serves as cathode, using graphite as anode. The other product is chlorine gas. A few large deposits produce or produced a significant amount of the world production. The Bayan Obo iron ore deposit in Inner Mongolia contains significant amounts of bastnäsite and monazite and is, with an estimated 36 million tonnes of rare-earth element oxides, the largest known deposit. The mining operations at the Bayan Obo deposit made China the largest supplier of rare-earth elements in the 1990s. Only 0.2% of the rare-earth element content is europium. The second large source for rare-earth elements between 1965 and its closure in the late 1990s was the Mountain Pass rare earth mine in California. The bastnäsite mined there is especially rich in the light rare-earth elements (La-Gd, Sc, and Y) and contains only 0.1% of europium. Another large source for rare-earth elements is the loparite found on the Kola peninsula. 
It contains besides niobium, tantalum and titanium up to 30% rare-earth elements and is the largest source for these elements in Russia. Europium compounds tend to exist in a trivalent oxidation state under most conditions. Commonly these compounds feature Eu(III) bound by 6–9 oxygenic ligands. The Eu(III) sulfates, nitrates and chlorides are soluble in water or polar organic solvents. Lipophilic europium complexes often feature acetylacetonate-like ligands, such as EuFOD. Europium metal reacts with all the halogens: This route gives white europium(III) fluoride (EuF3), yellow europium(III) chloride (EuCl3), gray europium(III) bromide (EuBr3), and colorless europium(III) iodide (EuI3). Europium also forms the corresponding dihalides: yellow-green europium(II) fluoride (EuF2), colorless europium(II) chloride (EuCl2) (although it has a bright blue fluorescence under UV light), colorless europium(II) bromide (EuBr2), and green europium(II) iodide (EuI2). Europium forms stable compounds with all of the chalcogens, but the heavier chalcogens (S, Se, and Te) stabilize the lower oxidation state. Three oxides are known: europium(II) oxide (EuO), europium(III) oxide (Eu2O3), and the mixed-valence oxide Eu3O4, consisting of both Eu(II) and Eu(III). Otherwise, the main chalcogenides are europium(II) sulfide (EuS), europium(II) selenide (EuSe) and europium(II) telluride (EuTe): all three of these are black solids. Europium(II) sulfide is prepared by sulfiding the oxide at temperatures sufficiently high to decompose the Eu2O3: The main nitride of europium is europium(III) nitride (EuN). Although europium is present in most of the minerals containing the other rare elements, due to the difficulties in separating the elements it was not until the late 1800s that the element was isolated. William Crookes observed the phosphorescent spectra of the rare elements including those eventually assigned to europium. Europium was first found in 1892 by Paul Émile Lecoq de Boisbaudran, who obtained basic fractions from samarium-gadolinium concentrates which had spectral lines not accounted for by samarium or gadolinium. However, the discovery of europium is generally credited to French chemist Eugène-Anatole Demarçay, who suspected samples of the recently discovered element samarium were contaminated with an unknown element in 1896 and who was able to isolate it in 1901; he then named it europium. When the europium-doped yttrium orthovanadate red phosphor was discovered in the early 1960s, and understood to be about to cause a revolution in the color television industry, there was a scramble for the limited supply of europium on hand among the monazite processors, as the typical europium content in monazite is about 0.05%. However, the Molycorp bastnäsite deposit at the Mountain Pass rare earth mine, California, whose lanthanides had an unusually high europium content of 0.1%, was about to come on-line and provide sufficient europium to sustain the industry. Prior to europium, the color-TV red phosphor was very weak, and the other phosphor colors had to be muted, to maintain color balance. With the brilliant red europium phosphor, it was no longer necessary to mute the other colors, and a much brighter color TV picture was the result. Europium has continued to be in use in the TV industry ever since as well as in computer monitors. Californian bastnäsite now faces stiff competition from Bayan Obo, China, with an even "richer" europium content of 0.2%. 
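As a rough sense of scale for the europium grades quoted above, the sketch below converts each grade into the mass of feedstock that nominally contains one kilogram of europium; processing losses are ignored and the figures are purely illustrative.

    # Feedstock nominally required per kilogram of contained europium,
    # using the grades quoted in the text; processing losses are ignored.
    grades = {
        "typical monazite (~0.05% Eu)": 0.0005,
        "Mountain Pass bastnasite lanthanides (~0.1% Eu)": 0.001,
        "Bayan Obo bastnasite (~0.2% Eu)": 0.002,
    }

    for source, mass_fraction in grades.items():
        kg_feed = 1.0 / mass_fraction  # kg of feed per kg of europium
        print(f"{source}: about {kg_feed:,.0f} kg of feed per kg of Eu")

At 0.05% this is roughly two tonnes of monazite per kilogram of europium, which is the scale behind the "scramble for the limited supply" described above.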
Frank Spedding, celebrated for his development of the ion-exchange technology that revolutionized the rare-earth industry in the mid-1950s, once related the story of how he was lecturing on the rare earths in the 1930s, when an elderly gentleman approached him with an offer of a gift of several pounds of europium oxide. This was an unheard-of quantity at the time, and Spedding did not take the man seriously. However, a package duly arrived in the mail, containing several pounds of genuine europium oxide. The elderly gentleman had turned out to be Herbert Newby McCoy, who had developed a famous method of europium purification involving redox chemistry. Relative to most other elements, commercial applications for europium are few and rather specialized. Almost invariably, its phosphorescence is exploited, either in the +2 or +3 oxidation state. It is a dopant in some types of glass in lasers and other optoelectronic devices. Europium oxide (Eu2O3) is widely used as a red phosphor in television sets and fluorescent lamps, and as an activator for yttrium-based phosphors. Color TV screens contain between 0.5 and 1 g of europium oxide. Whereas trivalent europium gives red phosphors, the luminescence of divalent europium depends strongly on the composition of the host structure. UV to deep red luminescence can be achieved. The two classes of europium-based phosphor (red and blue), combined with the yellow/green terbium phosphors give "white" light, the color temperature of which can be varied by altering the proportion or specific composition of the individual phosphors. This phosphor system is typically encountered in helical fluorescent light bulbs. Combining the same three classes is one way to make trichromatic systems in TV and computer screens, but as an additive, it can be particularly effective in improving the intensity of red phosphor. Europium is also used in the manufacture of fluorescent glass, increasing the general efficiency of fluorescent lamps. One of the more common persistent after-glow phosphors besides copper-doped zinc sulfide is europium-doped strontium aluminate. Europium fluorescence is used to interrogate biomolecular interactions in drug-discovery screens. It is also used in the anti-counterfeiting phosphors in euro banknotes. An application that has almost fallen out of use with the introduction of affordable superconducting magnets is the use of europium complexes, such as Eu(fod)3, as shift reagents in NMR spectroscopy. Chiral shift reagents, such as Eu(hfc)3, are still used to determine enantiomeric purity. There are no clear indications that europium is particularly toxic compared to other heavy metals. Europium chloride, nitrate and oxide have been tested for toxicity: europium chloride shows an acute intraperitoneal LD50 toxicity of 550 mg/kg and the acute oral LD50 toxicity is 5000 mg/kg. Europium nitrate shows a slightly higher intraperitoneal LD50 toxicity of 320 mg/kg, while the oral toxicity is above 5000 mg/kg. The metal dust presents a fire and explosion hazard.
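For scale only, the acute LD50 figures quoted above can be restated as total doses for a nominal body mass. The 70 kg mass used below is an arbitrary assumption, and animal-study LD50 values do not translate directly to humans; the sketch simply re-expresses the article's numbers in absolute terms.

    # Restate the acute LD50 values quoted above as total doses for an
    # assumed 70 kg body mass. Illustrative only: animal LD50 figures do
    # not transfer directly to humans.
    BODY_MASS_KG = 70  # assumed

    ld50_mg_per_kg = {
        "europium chloride, intraperitoneal": 550,
        "europium chloride, oral": 5000,
        "europium nitrate, intraperitoneal": 320,
        "europium nitrate, oral": 5000,  # text gives "above 5000 mg/kg"
    }

    for route, dose in ld50_mg_per_kg.items():
        print(f"{route}: about {dose * BODY_MASS_KG / 1000:.0f} g in total")

Doses on the order of tens to hundreds of grams are the scale behind the statement that europium is relatively non-toxic compared to other heavy metals.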
[ { "paragraph_id": 0, "text": "Europium is a chemical element; it has symbol Eu and atomic number 63. Europium is a silvery-white metal of the lanthanide series that reacts readily with air to form a dark oxide coating. It is the most chemically reactive, least dense, and softest of the lanthanide elements. It is soft enough to be cut with a knife. Europium was isolated in 1901 and named after the continent of Europe. Europium usually assumes the oxidation state +3, like other members of the lanthanide series, but compounds having oxidation state +2 are also common. All europium compounds with oxidation state +2 are slightly reducing. Europium has no significant biological role and is relatively non-toxic compared to other heavy metals. Most applications of europium exploit the phosphorescence of europium compounds. Europium is one of the rarest of the rare-earth elements on Earth.", "title": "" }, { "paragraph_id": 1, "text": "Europium is a ductile metal with a hardness similar to that of lead. It crystallizes in a body-centered cubic lattice. Some properties of europium are strongly influenced by its half-filled electron shell. Europium has the second lowest melting point and the lowest density of all lanthanides.", "title": "Characteristics" }, { "paragraph_id": 2, "text": "Europium has been claimed to become a superconductor when it is cooled below 1.8 K and compressed to above 80 GPa. However the experimental evidence on which this claim is based has been challenged, and the paper reporting superconductivity has been subsequently retracted. If it becomes a superconductor this is believed to occur because europium is divalent in the metallic state, and is converted into the trivalent state by the applied pressure. In the divalent state, the strong local magnetic moment (arising from total electronic angular momentum J = /2) suppresses the superconductivity, which is induced by eliminating this local moment (J = 0 in Eu).", "title": "Characteristics" }, { "paragraph_id": 3, "text": "Europium is the most reactive rare-earth element. It rapidly oxidizes in air, so that bulk oxidation of a centimeter-sized sample occurs within several days. Its reactivity with water is comparable to that of calcium, and the reaction is", "title": "Characteristics" }, { "paragraph_id": 4, "text": "Because of the high reactivity, samples of solid europium rarely have the shiny appearance of the fresh metal, even when coated with a protective layer of mineral oil. Europium ignites in air at 150 to 180 °C to form europium(III) oxide:", "title": "Characteristics" }, { "paragraph_id": 5, "text": "Europium dissolves readily in dilute sulfuric acid to form pale pink solutions of [Eu(H2O)9]:", "title": "Characteristics" }, { "paragraph_id": 6, "text": "Although usually trivalent, europium readily forms divalent compounds. This behavior is unusual for most lanthanides, which almost exclusively form compounds with an oxidation state of +3. The +2 state has an electron configuration 4f because the half-filled f-shell provides more stability. In terms of size and coordination number, europium(II) and barium(II) are similar. The sulfates of both barium and europium(II) are also highly insoluble in water. Divalent europium is a mild reducing agent, oxidizing in air to form Eu(III) compounds. In anaerobic, and particularly geothermal conditions, the divalent form is sufficiently stable that it tends to be incorporated into minerals of calcium and the other alkaline earths. 
This ion-exchange process is the basis of the \"negative europium anomaly\", the low europium content in many lanthanide minerals such as monazite, relative to the chondritic abundance. Bastnäsite tends to show less of a negative europium anomaly than does monazite, and hence is the major source of europium today. The development of easy methods to separate divalent europium from the other (trivalent) lanthanides made europium accessible even when present in low concentration, as it usually is.", "title": "Characteristics" }, { "paragraph_id": 7, "text": "Naturally occurring europium is composed of two isotopes, Eu and Eu, which occur in almost equal proportions; Eu is slightly more abundant (52.2% natural abundance). While Eu is stable, Eu was found to be unstable to alpha decay with a half-life of 5+11−3×10 years in 2007, giving about one alpha decay per two minutes in every kilogram of natural europium. This value is in reasonable agreement with theoretical predictions. Besides the natural radioisotope Eu, 35 artificial radioisotopes have been characterized, the most stable being Eu with a half-life of 36.9 years, Eu with a half-life of 13.516 years, and Eu with a half-life of 8.593 years. All the remaining radioactive isotopes have half-lives shorter than 4.7612 years, and the majority of these have half-lives shorter than 12.2 seconds; the known isotopes of europium range from Eu to Eu. This element also has 17 meta states, with the most stable being Eu (t1/2=12.8 hours), Eu (t1/2=9.3116 hours) and Eu (t1/2=96 minutes).", "title": "Characteristics" }, { "paragraph_id": 8, "text": "The primary decay mode for isotopes lighter than Eu is electron capture, and the primary mode for heavier isotopes is beta minus decay. The primary decay products before Eu are isotopes of samarium (Sm) and the primary products after are isotopes of gadolinium (Gd).", "title": "Characteristics" }, { "paragraph_id": 9, "text": "Europium is produced by nuclear fission, but the fission product yields of europium isotopes are low near the top of the mass range for fission products.", "title": "Characteristics" }, { "paragraph_id": 10, "text": "As with other lanthanides, many isotopes of europium, especially those that have odd mass numbers or are neutron-poor like Eu, have high cross sections for neutron capture, often high enough to be neutron poisons.", "title": "Characteristics" }, { "paragraph_id": 11, "text": "Eu is the beta decay product of samarium-151, but since this has a long decay half-life and short mean time to neutron absorption, most Sm instead ends up as Sm.", "title": "Characteristics" }, { "paragraph_id": 12, "text": "Eu (half-life 13.516 years) and Eu (half-life 8.593 years) cannot be beta decay products because Sm and Sm are non-radioactive, but Eu is the only long-lived \"shielded\" nuclide, other than Cs, to have a fission yield of more than 2.5 parts per million fissions. 
A larger amount of Eu is produced by neutron activation of a significant portion of the non-radioactive Eu; however, much of this is further converted to Eu.", "title": "Characteristics" }, { "paragraph_id": 13, "text": "Eu (half-life 4.7612 years) has a fission yield of 330 parts per million (ppm) for uranium-235 and thermal neutrons; most of it is transmuted to non-radioactive and nonabsorptive gadolinium-156 by the end of fuel burnup.", "title": "Characteristics" }, { "paragraph_id": 14, "text": "Overall, europium is overshadowed by caesium-137 and strontium-90 as a radiation hazard, and by samarium and others as a neutron poison.", "title": "Characteristics" }, { "paragraph_id": 15, "text": "Europium is not found in nature as a free element. Many minerals contain europium, with the most important sources being bastnäsite, monazite, xenotime and loparite-(Ce). No europium-dominant minerals are known yet, despite a single find of a tiny possible Eu–O or Eu–O–C system phase in the Moon's regolith.", "title": "Characteristics" }, { "paragraph_id": 16, "text": "Depletion or enrichment of europium in minerals relative to other rare-earth elements is known as the europium anomaly. Europium is commonly included in trace element studies in geochemistry and petrology to understand the processes that form igneous rocks (rocks that cooled from magma or lava). The nature of the europium anomaly found helps reconstruct the relationships within a suite of igneous rocks. The average crustal abundance of europium is 2–2.2 ppm.", "title": "Characteristics" }, { "paragraph_id": 17, "text": "Divalent europium (Eu) in small amounts is the activator of the bright blue fluorescence of some samples of the mineral fluorite (CaF2). The reduction from Eu to Eu is induced by irradiation with energetic particles. The most outstanding examples of this originated around Weardale and adjacent parts of northern England; it was the fluorite found here that fluorescence was named after in 1852, although it was not until much later that europium was determined to be the cause.", "title": "Characteristics" }, { "paragraph_id": 18, "text": "In astrophysics, the signature of europium in stellar spectra can be used to classify stars and inform theories of how or where a particular star was born. For instance, astronomers in 2019 identified higher-than-expected levels of europium within the star J1124+4535, hypothesizing that this star originated in a dwarf galaxy that collided with the Milky Way billions of years ago.", "title": "Characteristics" }, { "paragraph_id": 19, "text": "Europium is associated with the other rare-earth elements and is, therefore, mined together with them. Separation of the rare-earth elements occurs during later processing. Rare-earth elements are found in the minerals bastnäsite, loparite-(Ce), xenotime, and monazite in mineable quantities. Bastnäsite is a group of related fluorocarbonates, Ln(CO3)(F,OH). Monazite is a group of related of orthophosphate minerals LnPO4 (Ln denotes a mixture of all the lanthanides except promethium), loparite-(Ce) is an oxide, and xenotime is an orthophosphate (Y,Yb,Er,...)PO4. Monazite also contains thorium and yttrium, which complicates handling because thorium and its decay products are radioactive. For the extraction from the ore and the isolation of individual lanthanides, several methods have been developed. 
The choice of method is based on the concentration and composition of the ore and on the distribution of the individual lanthanides in the resulting concentrate. Roasting the ore, followed by acidic and basic leaching, is used mostly to produce a concentrate of lanthanides. If cerium is the dominant lanthanide, then it is converted from cerium(III) to cerium(IV) and then precipitated. Further separation by solvent extractions or ion exchange chromatography yields a fraction which is enriched in europium. This fraction is reduced with zinc, zinc/amalgam, electrolysis or other methods converting the europium(III) to europium(II). Europium(II) reacts in a way similar to that of alkaline earth metals and therefore it can be precipitated as a carbonate or co-precipitated with barium sulfate. Europium metal is available through the electrolysis of a mixture of molten EuCl3 and NaCl (or CaCl2) in a graphite cell, which serves as cathode, using graphite as anode. The other product is chlorine gas.", "title": "Production" }, { "paragraph_id": 20, "text": "A few large deposits produce or produced a significant amount of the world production. The Bayan Obo iron ore deposit in Inner Mongolia contains significant amounts of bastnäsite and monazite and is, with an estimated 36 million tonnes of rare-earth element oxides, the largest known deposit. The mining operations at the Bayan Obo deposit made China the largest supplier of rare-earth elements in the 1990s. Only 0.2% of the rare-earth element content is europium. The second large source for rare-earth elements between 1965 and its closure in the late 1990s was the Mountain Pass rare earth mine in California. The bastnäsite mined there is especially rich in the light rare-earth elements (La-Gd, Sc, and Y) and contains only 0.1% of europium. Another large source for rare-earth elements is the loparite found on the Kola peninsula. It contains besides niobium, tantalum and titanium up to 30% rare-earth elements and is the largest source for these elements in Russia.", "title": "Production" }, { "paragraph_id": 21, "text": "Europium compounds tend to exist in a trivalent oxidation state under most conditions. Commonly these compounds feature Eu(III) bound by 6–9 oxygenic ligands. The Eu(III) sulfates, nitrates and chlorides are soluble in water or polar organic solvents. Lipophilic europium complexes often feature acetylacetonate-like ligands, such as EuFOD.", "title": "Compounds" }, { "paragraph_id": 22, "text": "Europium metal reacts with all the halogens:", "title": "Compounds" }, { "paragraph_id": 23, "text": "This route gives white europium(III) fluoride (EuF3), yellow europium(III) chloride (EuCl3), gray europium(III) bromide (EuBr3), and colorless europium(III) iodide (EuI3). Europium also forms the corresponding dihalides: yellow-green europium(II) fluoride (EuF2), colorless europium(II) chloride (EuCl2) (although it has a bright blue fluorescence under UV light), colorless europium(II) bromide (EuBr2), and green europium(II) iodide (EuI2).", "title": "Compounds" }, { "paragraph_id": 24, "text": "Europium forms stable compounds with all of the chalcogens, but the heavier chalcogens (S, Se, and Te) stabilize the lower oxidation state. Three oxides are known: europium(II) oxide (EuO), europium(III) oxide (Eu2O3), and the mixed-valence oxide Eu3O4, consisting of both Eu(II) and Eu(III). 
Otherwise, the main chalcogenides are europium(II) sulfide (EuS), europium(II) selenide (EuSe) and europium(II) telluride (EuTe): all three of these are black solids. Europium(II) sulfide is prepared by sulfiding the oxide at temperatures sufficiently high to decompose the Eu2O3:", "title": "Compounds" }, { "paragraph_id": 25, "text": "The main nitride of europium is europium(III) nitride (EuN).", "title": "Compounds" }, { "paragraph_id": 26, "text": "Although europium is present in most of the minerals containing the other rare elements, due to the difficulties in separating the elements it was not until the late 1800s that the element was isolated. William Crookes observed the phosphorescent spectra of the rare elements including those eventually assigned to europium.", "title": "History" }, { "paragraph_id": 27, "text": "Europium was first found in 1892 by Paul Émile Lecoq de Boisbaudran, who obtained basic fractions from samarium-gadolinium concentrates which had spectral lines not accounted for by samarium or gadolinium. However, the discovery of europium is generally credited to French chemist Eugène-Anatole Demarçay, who suspected samples of the recently discovered element samarium were contaminated with an unknown element in 1896 and who was able to isolate it in 1901; he then named it europium.", "title": "History" }, { "paragraph_id": 28, "text": "When the europium-doped yttrium orthovanadate red phosphor was discovered in the early 1960s, and understood to be about to cause a revolution in the color television industry, there was a scramble for the limited supply of europium on hand among the monazite processors, as the typical europium content in monazite is about 0.05%. However, the Molycorp bastnäsite deposit at the Mountain Pass rare earth mine, California, whose lanthanides had an unusually high europium content of 0.1%, was about to come on-line and provide sufficient europium to sustain the industry. Prior to europium, the color-TV red phosphor was very weak, and the other phosphor colors had to be muted, to maintain color balance. With the brilliant red europium phosphor, it was no longer necessary to mute the other colors, and a much brighter color TV picture was the result. Europium has continued to be in use in the TV industry ever since as well as in computer monitors. Californian bastnäsite now faces stiff competition from Bayan Obo, China, with an even \"richer\" europium content of 0.2%.", "title": "History" }, { "paragraph_id": 29, "text": "Frank Spedding, celebrated for his development of the ion-exchange technology that revolutionized the rare-earth industry in the mid-1950s, once related the story of how he was lecturing on the rare earths in the 1930s, when an elderly gentleman approached him with an offer of a gift of several pounds of europium oxide. This was an unheard-of quantity at the time, and Spedding did not take the man seriously. However, a package duly arrived in the mail, containing several pounds of genuine europium oxide. The elderly gentleman had turned out to be Herbert Newby McCoy, who had developed a famous method of europium purification involving redox chemistry.", "title": "History" }, { "paragraph_id": 30, "text": "Relative to most other elements, commercial applications for europium are few and rather specialized. 
Almost invariably, its phosphorescence is exploited, either in the +2 or +3 oxidation state.", "title": "Applications" }, { "paragraph_id": 31, "text": "It is a dopant in some types of glass in lasers and other optoelectronic devices. Europium oxide (Eu2O3) is widely used as a red phosphor in television sets and fluorescent lamps, and as an activator for yttrium-based phosphors. Color TV screens contain between 0.5 and 1 g of europium oxide. Whereas trivalent europium gives red phosphors, the luminescence of divalent europium depends strongly on the composition of the host structure. UV to deep red luminescence can be achieved. The two classes of europium-based phosphor (red and blue), combined with the yellow/green terbium phosphors give \"white\" light, the color temperature of which can be varied by altering the proportion or specific composition of the individual phosphors. This phosphor system is typically encountered in helical fluorescent light bulbs. Combining the same three classes is one way to make trichromatic systems in TV and computer screens, but as an additive, it can be particularly effective in improving the intensity of red phosphor. Europium is also used in the manufacture of fluorescent glass, increasing the general efficiency of fluorescent lamps. One of the more common persistent after-glow phosphors besides copper-doped zinc sulfide is europium-doped strontium aluminate. Europium fluorescence is used to interrogate biomolecular interactions in drug-discovery screens. It is also used in the anti-counterfeiting phosphors in euro banknotes.", "title": "Applications" }, { "paragraph_id": 32, "text": "An application that has almost fallen out of use with the introduction of affordable superconducting magnets is the use of europium complexes, such as Eu(fod)3, as shift reagents in NMR spectroscopy. Chiral shift reagents, such as Eu(hfc)3, are still used to determine enantiomeric purity.", "title": "Applications" }, { "paragraph_id": 33, "text": "There are no clear indications that europium is particularly toxic compared to other heavy metals. Europium chloride, nitrate and oxide have been tested for toxicity: europium chloride shows an acute intraperitoneal LD50 toxicity of 550 mg/kg and the acute oral LD50 toxicity is 5000 mg/kg. Europium nitrate shows a slightly higher intraperitoneal LD50 toxicity of 320 mg/kg, while the oral toxicity is above 5000 mg/kg. The metal dust presents a fire and explosion hazard.", "title": "Precautions" } ]
Europium is a chemical element; it has symbol Eu and atomic number 63. Europium is a silvery-white metal of the lanthanide series that reacts readily with air to form a dark oxide coating. It is the most chemically reactive, least dense, and softest of the lanthanide elements. It is soft enough to be cut with a knife. Europium was isolated in 1901 and named after the continent of Europe. Europium usually assumes the oxidation state +3, like other members of the lanthanide series, but compounds having oxidation state +2 are also common. All europium compounds with oxidation state +2 are slightly reducing. Europium has no significant biological role and is relatively non-toxic compared to other heavy metals. Most applications of europium exploit the phosphorescence of europium compounds. Europium is one of the rarest of the rare-earth elements on Earth.
2001-05-17T14:25:10Z
2023-11-17T21:00:50Z
[ "Template:Greenwood&Earnshaw2nd", "Template:Good article", "Template:Main", "Template:Val", "Template:Medium-lived fission products", "Template:Main article", "Template:Reflist", "Template:Cite web", "Template:Commons", "Template:Clear", "Template:Europium compounds", "Template:Infobox europium", "Template:Chem", "Template:Cite report", "Template:Wiktionary", "Template:Periodic table (navbox)", "Template:Authority control", "Template:NUBASE2020", "Template:Chembox", "Template:ISBN", "Template:Cite journal", "Template:Cite book", "Template:Ullmann", "Template:Webarchive" ]
https://en.wikipedia.org/wiki/Europium
9,478
Erbium
Erbium is a chemical element; it has symbol Er and atomic number 68. A silvery-white solid metal when artificially isolated, natural erbium is always found in chemical combination with other elements. It is a lanthanide, a rare-earth element, originally found in the gadolinite mine in Ytterby, Sweden, which is the source of the element's name. Erbium's principal uses involve its pink-colored Er³⁺ ions, which have optical fluorescent properties particularly useful in certain laser applications. Erbium-doped glasses or crystals can be used as optical amplification media, where Er³⁺ ions are optically pumped at around 980 or 1480 nm and then radiate light at 1530 nm in stimulated emission. This process results in an unusually mechanically simple laser optical amplifier for signals transmitted by fiber optics. The 1550 nm wavelength is especially important for optical communications because standard single mode optical fibers have minimal loss at this particular wavelength. In addition to optical fiber amplifier-lasers, a large variety of medical applications (i.e. dermatology, dentistry) rely on the erbium ion's 2940 nm emission (see Er:YAG laser) when lit at another wavelength, which is highly absorbed in water in tissues, making its effect very superficial. Such shallow tissue deposition of laser energy is helpful in laser surgery, and for the efficient production of steam which produces enamel ablation by common types of dental laser. A trivalent element, pure erbium metal is malleable (or easily shaped), soft yet stable in air, and does not oxidize as quickly as some other rare-earth metals. Its salts are rose-colored, and the element has characteristic sharp absorption spectra bands in visible light, ultraviolet, and near infrared. Otherwise it looks much like the other rare earths. Its sesquioxide is called erbia. Erbium's properties are to a degree dictated by the kind and amount of impurities present. Erbium does not play any known biological role, but is thought to be able to stimulate metabolism. Erbium is ferromagnetic below 19 K, antiferromagnetic between 19 and 80 K and paramagnetic above 80 K. Erbium can form propeller-shaped atomic clusters Er3N, where the distance between the erbium atoms is 0.35 nm. Those clusters can be isolated by encapsulating them into fullerene molecules, as confirmed by transmission electron microscopy. Like most rare-earth elements, erbium is usually found in the +3 oxidation state. However, it is possible for erbium to also be found in the 0, +1 and +2 oxidation states. Erbium metal retains its luster in dry air, but will tarnish slowly in moist air and burns readily to form erbium(III) oxide: 4 Er + 3 O2 → 2 Er2O3. Erbium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form erbium hydroxide: 2 Er + 6 H2O → 2 Er(OH)3 + 3 H2. Erbium metal reacts with all the halogens: 2 Er + 3 X2 → 2 ErX3 (X = F, Cl, Br, I). Erbium dissolves readily in dilute sulfuric acid to form solutions containing hydrated Er(III) ions, which exist as rose red [Er(OH2)9]³⁺ hydration complexes: 2 Er + 3 H2SO4 + 18 H2O → 2 [Er(OH2)9]³⁺ + 3 SO4²⁻ + 3 H2. Naturally occurring erbium is composed of 6 stable isotopes, ¹⁶²Er, ¹⁶⁴Er, ¹⁶⁶Er, ¹⁶⁷Er, ¹⁶⁸Er, and ¹⁷⁰Er, with ¹⁶⁶Er being the most abundant (33.503% natural abundance). 29 radioisotopes have been characterized, with the most stable being ¹⁶⁹Er with a half-life of 9.4 d, ¹⁷²Er with a half-life of 49.3 h, ¹⁶⁰Er with a half-life of 28.58 h, ¹⁶⁵Er with a half-life of 10.36 h, and ¹⁷¹Er with a half-life of 7.516 h. All of the remaining radioactive isotopes have half-lives that are less than 3.5 h, and the majority of these have half-lives that are less than 4 minutes.
This element also has 13 meta states, with the most stable being Er with a half-life of 2.269 s. The isotopes of erbium range in atomic weight from 142.9663 u (Er) to 176.9541 u (Er). The primary decay mode before the most abundant stable isotope, Er, is electron capture, and the primary mode after is beta decay. The primary decay products before Er are element 67 (holmium) isotopes, and the primary products after are element 69 (thulium) isotopes. Erbium(III) oxide (also known as erbia) is the only known oxide of erbium, first isolated by Carl Gustaf Mosander in 1843, and first obtained in pure form in 1905 by Georges Urbain and Charles James. It has a cubic structure resembling the bixbyite motif. The Er centers are octahedral. The formation of erbium oxide is accomplished by burning erbium metal. Erbium oxide is insoluble in water and soluble in mineral acids. Erbium(III) fluoride is a pinkish powder that can be produced by reacting erbium(III) nitrate and ammonium fluoride. It can be used to make infrared light-transmitting materials and up-converting luminescent materials. Erbium(III) chloride is a violet compounds that can be formed by first heating erbium(III) oxide and ammonium chloride to produce the ammonium salt of the pentachloride ([NH4]2ErCl5) then heating it in a vacuum at 350-400 °C. It forms crystals of the AlCl3 type, with monoclinic crystals and the point group C2/m. Erbium(III) chloride hexahydrate also forms monoclinic crystals with the point group of P2/n (P2/c) - C2h. In this compound, erbium is octa-coordinated to form [Er(H2O)6Cl2] ions with the isolated Cl completing the structure. Erbium(III) bromide is a violet solid. It is used, like other metal bromide compounds, in water treatment, chemical analysis and for certain crystal growth applications. Erbium(III) iodide is a slightly pink compound that is insoluble in water. It can be prepared by directly reacting erbium with iodine. Organoerbium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. Erbium (for Ytterby, a village in Sweden) was discovered by Carl Gustaf Mosander in 1843. Mosander was working with a sample of what was thought to be the single metal oxide yttria, derived from the mineral gadolinite. He discovered that the sample contained at least two metal oxides in addition to pure yttria, which he named "erbia" and "terbia" after the village of Ytterby where the gadolinite had been found. Mosander was not certain of the purity of the oxides and later tests confirmed his uncertainty. Not only did the "yttria" contain yttrium, erbium, and terbium; in the ensuing years, chemists, geologists and spectroscopists discovered five additional elements: ytterbium, scandium, thulium, holmium, and gadolinium. Erbia and terbia, however, were confused at this time. A spectroscopist mistakenly switched the names of the two elements during spectroscopy. After 1860, terbia was renamed erbia and after 1877 what had been known as erbia was renamed terbia. Fairly pure Er2O3 was independently isolated in 1905 by Georges Urbain and Charles James. Reasonably pure erbium metal was not produced until 1934 when Wilhelm Klemm and Heinrich Bommer reduced the anhydrous chloride with potassium vapor. 
It was only in the 1990s that the price for Chinese-derived erbium oxide became low enough for erbium to be considered for use as a colorant in art glass. The concentration of erbium in the Earth crust is about 2.8 mg/kg and in seawater 0.9 ng/L. Erbium is the 44th most abundant element in the Earth's crust at about 3.0–3.8 ppm. Like other rare earths, this element is never found as a free element in nature but is found bound in monazite sand ores. It has historically been very difficult and expensive to separate rare earths from each other in their ores but ion-exchange chromatography methods developed in the late 20th century have greatly brought down the cost of production of all rare-earth metals and their chemical compounds. The principal commercial sources of erbium are from the minerals xenotime and euxenite, and most recently, the ion adsorption clays of southern China; in consequence, China has now become the principal global supplier of this element. In the high-yttrium versions of these ore concentrates, yttrium is about two-thirds of the total by weight, and erbia is about 4–5%. When the concentrate is dissolved in acid, the erbia liberates enough erbium ion to impart a distinct and characteristic pink color to the solution. This color behavior is similar to what Mosander and the other early workers in the lanthanides would have seen in their extracts from the gadolinite minerals of Ytterby. Crushed minerals are attacked by hydrochloric or sulfuric acid that transforms insoluble rare-earth oxides into soluble chlorides or sulfates. The acidic filtrates are partially neutralized with caustic soda (sodium hydroxide) to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that the solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3. The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of rare-earth metals. The salts are separated by ion exchange. In this process, rare-earth ions are sorbed onto suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare earth ions are then selectively washed out by suitable complexing agent. Erbium metal is obtained from its oxide or salts by heating with calcium at 1450 °C under argon atmosphere. Erbium's everyday uses are varied. It is commonly used as a photographic filter, and because of its resilience it is useful as a metallurgical additive. A large variety of medical applications (i.e. dermatology, dentistry) utilize erbium ion's 2940 nm emission (see Er:YAG laser), which is highly absorbed in water (absorption coefficient about 12000/cm). Such shallow tissue deposition of laser energy is necessary for laser surgery, and the efficient production of steam for laser enamel ablation in dentistry. Erbium-doped optical silica-glass fibers are the active element in erbium-doped fiber amplifiers (EDFAs), which are widely used in optical communications. The same fibers can be used to create fiber lasers. In order to work efficiently, erbium-doped fiber is usually co-doped with glass modifiers/homogenizers, often aluminium or phosphorus. These dopants help prevent clustering of Er ions and transfer the energy more efficiently between excitation light (also known as optical pump) and the signal. 
Co-doping of optical fiber with Er and Yb is used in high-power Er/Yb fiber lasers. Erbium can also be used in erbium-doped waveguide amplifiers. When added to vanadium as an alloy, erbium lowers hardness and improves workability. An erbium-nickel alloy Er3Ni has an unusually high specific heat capacity at liquid-helium temperatures and is used in cryocoolers; a mixture of 65% Er3Co and 35% Er0.9Yb0.1Ni by volume improves the specific heat capacity even more. Erbium oxide has a pink color, and is sometimes used as a colorant for glass, cubic zirconia and porcelain. The glass is then often used in sunglasses and cheap jewelry. Erbium is used in nuclear technology in neutron-absorbing control rods. or as a burnable poison in nuclear fuel design. Recently, erbium has been used in experiments related to lattice confinement fusion. Erbium does not have a biological role, but erbium salts can stimulate metabolism. Humans consume 1 milligram of erbium a year on average. The highest concentration of erbium in humans is in the bones, but there is also erbium in the human kidneys and liver. Erbium is slightly toxic if ingested, but erbium compounds are not toxic. Metallic erbium in dust form presents a fire and explosion hazard.
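Two figures quoted in the applications discussion above lend themselves to a quick arithmetic check: the ~12000/cm water absorption coefficient at the 2940 nm Er:YAG line implies a sub-micrometre optical penetration depth (the "shallow tissue deposition" mentioned), and the 980 nm pump versus ~1530 nm signal wavelengths of an erbium-doped fiber amplifier bound its power-conversion efficiency through the quantum defect. The sketch below only restates that arithmetic and is not a model of either device.

    # 1) 1/e optical penetration depth in water at the 2940 nm Er:YAG line,
    #    using the ~12000/cm absorption coefficient quoted above.
    absorption_per_cm = 12_000
    penetration_depth_um = (1.0 / absorption_per_cm) * 1e4  # cm -> micrometres
    print(f"1/e penetration depth: ~{penetration_depth_um:.1f} um")  # ~0.8 um

    # 2) Quantum-defect limit for an erbium-doped fiber amplifier pumped at
    #    980 nm and emitting near 1530 nm: each emitted photon carries less
    #    energy than the pump photon that produced it.
    pump_nm, signal_nm = 980, 1530
    print(f"Maximum power-conversion efficiency: ~{pump_nm / signal_nm:.0%}")  # ~64%

Pumping in the 1480 nm band mentioned above gives a smaller quantum defect (roughly 1480/1530, or about 97%), which is one reason both pump bands are used in practice.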
[ { "paragraph_id": 0, "text": "Erbium is a chemical element; it has symbol Er and atomic number 68. A silvery-white solid metal when artificially isolated, natural erbium is always found in chemical combination with other elements. It is a lanthanide, a rare-earth element, originally found in the gadolinite mine in Ytterby, Sweden, which is the source of the element's name.", "title": "" }, { "paragraph_id": 1, "text": "Erbium's principal uses involve its pink-colored Er ions, which have optical fluorescent properties particularly useful in certain laser applications. Erbium-doped glasses or crystals can be used as optical amplification media, where Er ions are optically pumped at around 980 or 1480 nm and then radiate light at 1530 nm in stimulated emission. This process results in an unusually mechanically simple laser optical amplifier for signals transmitted by fiber optics. The 1550 nm wavelength is especially important for optical communications because standard single mode optical fibers have minimal loss at this particular wavelength.", "title": "" }, { "paragraph_id": 2, "text": "In addition to optical fiber amplifier-lasers, a large variety of medical applications (i.e. dermatology, dentistry) rely on the erbium ion's 2940 nm emission (see Er:YAG laser) when lit at another wavelength, which is highly absorbed in water in tissues, making its effect very superficial. Such shallow tissue deposition of laser energy is helpful in laser surgery, and for the efficient production of steam which produces enamel ablation by common types of dental laser.", "title": "" }, { "paragraph_id": 3, "text": "A trivalent element, pure erbium metal is malleable (or easily shaped), soft yet stable in air, and does not oxidize as quickly as some other rare-earth metals. Its salts are rose-colored, and the element has characteristic sharp absorption spectra bands in visible light, ultraviolet, and near infrared. Otherwise it looks much like the other rare earths. Its sesquioxide is called erbia. Erbium's properties are to a degree dictated by the kind and amount of impurities present. Erbium does not play any known biological role, but is thought to be able to stimulate metabolism.", "title": "Characteristics" }, { "paragraph_id": 4, "text": "Erbium is ferromagnetic below 19 K, antiferromagnetic between 19 and 80 K and paramagnetic above 80 K.", "title": "Characteristics" }, { "paragraph_id": 5, "text": "Erbium can form propeller-shaped atomic clusters Er3N, where the distance between the erbium atoms is 0.35 nm. Those clusters can be isolated by encapsulating them into fullerene molecules, as confirmed by transmission electron microscopy.", "title": "Characteristics" }, { "paragraph_id": 6, "text": "Like most rare-earth elements, erbium is usually found in the +3 oxidation state. 
However, it is possible for erbium to also be found in the 0, +1 and +2 oxidation states.", "title": "Characteristics" }, { "paragraph_id": 7, "text": "Erbium metal retains its luster in dry air, however will tarnish slowly in moist air and burns readily to form erbium(III) oxide:", "title": "Characteristics" }, { "paragraph_id": 8, "text": "Erbium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form erbium hydroxide:", "title": "Characteristics" }, { "paragraph_id": 9, "text": "Erbium metal reacts with all the halogens:", "title": "Characteristics" }, { "paragraph_id": 10, "text": "Erbium dissolves readily in dilute sulfuric acid to form solutions containing hydrated Er(III) ions, which exist as rose red [Er(OH2)9] hydration complexes:", "title": "Characteristics" }, { "paragraph_id": 11, "text": "Naturally occurring erbium is composed of 6 stable isotopes, Er, Er, Er, Er, Er, and Er, with Er being the most abundant (33.503% natural abundance). 29 radioisotopes have been characterized, with the most stable being Er with a half-life of 9.4 d, Er with a half-life of 49.3 h, Er with a half-life of 28.58 h, Er with a half-life of 10.36 h, and Er with a half-life of 7.516 h. All of the remaining radioactive isotopes have half-lives that are less than 3.5 h, and the majority of these have half-lives that are less than 4 minutes. This element also has 13 meta states, with the most stable being Er with a half-life of 2.269 s.", "title": "Characteristics" }, { "paragraph_id": 12, "text": "The isotopes of erbium range in atomic weight from 142.9663 u (Er) to 176.9541 u (Er). The primary decay mode before the most abundant stable isotope, Er, is electron capture, and the primary mode after is beta decay. The primary decay products before Er are element 67 (holmium) isotopes, and the primary products after are element 69 (thulium) isotopes.", "title": "Characteristics" }, { "paragraph_id": 13, "text": "Erbium(III) oxide (also known as erbia) is the only known oxide of erbium, first isolated by Carl Gustaf Mosander in 1843, and first obtained in pure form in 1905 by Georges Urbain and Charles James. It has a cubic structure resembling the bixbyite motif. The Er centers are octahedral. The formation of erbium oxide is accomplished by burning erbium metal. Erbium oxide is insoluble in water and soluble in mineral acids.", "title": "Compounds" }, { "paragraph_id": 14, "text": "Erbium(III) fluoride is a pinkish powder that can be produced by reacting erbium(III) nitrate and ammonium fluoride. It can be used to make infrared light-transmitting materials and up-converting luminescent materials. Erbium(III) chloride is a violet compounds that can be formed by first heating erbium(III) oxide and ammonium chloride to produce the ammonium salt of the pentachloride ([NH4]2ErCl5) then heating it in a vacuum at 350-400 °C. It forms crystals of the AlCl3 type, with monoclinic crystals and the point group C2/m. Erbium(III) chloride hexahydrate also forms monoclinic crystals with the point group of P2/n (P2/c) - C2h. In this compound, erbium is octa-coordinated to form [Er(H2O)6Cl2] ions with the isolated Cl completing the structure.", "title": "Compounds" }, { "paragraph_id": 15, "text": "Erbium(III) bromide is a violet solid. It is used, like other metal bromide compounds, in water treatment, chemical analysis and for certain crystal growth applications. Erbium(III) iodide is a slightly pink compound that is insoluble in water. 
It can be prepared by directly reacting erbium with iodine.", "title": "Compounds" }, { "paragraph_id": 16, "text": "Organoerbium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric.", "title": "Compounds" }, { "paragraph_id": 17, "text": "Erbium (for Ytterby, a village in Sweden) was discovered by Carl Gustaf Mosander in 1843. Mosander was working with a sample of what was thought to be the single metal oxide yttria, derived from the mineral gadolinite. He discovered that the sample contained at least two metal oxides in addition to pure yttria, which he named \"erbia\" and \"terbia\" after the village of Ytterby where the gadolinite had been found. Mosander was not certain of the purity of the oxides and later tests confirmed his uncertainty. Not only did the \"yttria\" contain yttrium, erbium, and terbium; in the ensuing years, chemists, geologists and spectroscopists discovered five additional elements: ytterbium, scandium, thulium, holmium, and gadolinium.", "title": "History" }, { "paragraph_id": 18, "text": "Erbia and terbia, however, were confused at this time. A spectroscopist mistakenly switched the names of the two elements during spectroscopy. After 1860, terbia was renamed erbia and after 1877 what had been known as erbia was renamed terbia. Fairly pure Er2O3 was independently isolated in 1905 by Georges Urbain and Charles James. Reasonably pure erbium metal was not produced until 1934 when Wilhelm Klemm and Heinrich Bommer reduced the anhydrous chloride with potassium vapor. It was only in the 1990s that the price for Chinese-derived erbium oxide became low enough for erbium to be considered for use as a colorant in art glass.", "title": "History" }, { "paragraph_id": 19, "text": "The concentration of erbium in the Earth crust is about 2.8 mg/kg and in seawater 0.9 ng/L. Erbium is the 44th most abundant element in the Earth's crust at about 3.0–3.8 ppm.", "title": "Occurrence" }, { "paragraph_id": 20, "text": "Like other rare earths, this element is never found as a free element in nature but is found bound in monazite sand ores. It has historically been very difficult and expensive to separate rare earths from each other in their ores but ion-exchange chromatography methods developed in the late 20th century have greatly brought down the cost of production of all rare-earth metals and their chemical compounds.", "title": "Occurrence" }, { "paragraph_id": 21, "text": "The principal commercial sources of erbium are from the minerals xenotime and euxenite, and most recently, the ion adsorption clays of southern China; in consequence, China has now become the principal global supplier of this element. In the high-yttrium versions of these ore concentrates, yttrium is about two-thirds of the total by weight, and erbia is about 4–5%. When the concentrate is dissolved in acid, the erbia liberates enough erbium ion to impart a distinct and characteristic pink color to the solution. 
This color behavior is similar to what Mosander and the other early workers in the lanthanides would have seen in their extracts from the gadolinite minerals of Ytterby.", "title": "Occurrence" }, { "paragraph_id": 22, "text": "Crushed minerals are attacked by hydrochloric or sulfuric acid that transforms insoluble rare-earth oxides into soluble chlorides or sulfates. The acidic filtrates are partially neutralized with caustic soda (sodium hydroxide) to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that the solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3. The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of rare-earth metals. The salts are separated by ion exchange. In this process, rare-earth ions are sorbed onto suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare earth ions are then selectively washed out by suitable complexing agent. Erbium metal is obtained from its oxide or salts by heating with calcium at 1450 °C under argon atmosphere.", "title": "Production" }, { "paragraph_id": 23, "text": "Erbium's everyday uses are varied. It is commonly used as a photographic filter, and because of its resilience it is useful as a metallurgical additive.", "title": "Applications" }, { "paragraph_id": 24, "text": "A large variety of medical applications (i.e. dermatology, dentistry) utilize erbium ion's 2940 nm emission (see Er:YAG laser), which is highly absorbed in water (absorption coefficient about 12000/cm). Such shallow tissue deposition of laser energy is necessary for laser surgery, and the efficient production of steam for laser enamel ablation in dentistry.", "title": "Applications" }, { "paragraph_id": 25, "text": "Erbium-doped optical silica-glass fibers are the active element in erbium-doped fiber amplifiers (EDFAs), which are widely used in optical communications. The same fibers can be used to create fiber lasers. In order to work efficiently, erbium-doped fiber is usually co-doped with glass modifiers/homogenizers, often aluminium or phosphorus. These dopants help prevent clustering of Er ions and transfer the energy more efficiently between excitation light (also known as optical pump) and the signal. Co-doping of optical fiber with Er and Yb is used in high-power Er/Yb fiber lasers. Erbium can also be used in erbium-doped waveguide amplifiers.", "title": "Applications" }, { "paragraph_id": 26, "text": "When added to vanadium as an alloy, erbium lowers hardness and improves workability. An erbium-nickel alloy Er3Ni has an unusually high specific heat capacity at liquid-helium temperatures and is used in cryocoolers; a mixture of 65% Er3Co and 35% Er0.9Yb0.1Ni by volume improves the specific heat capacity even more.", "title": "Applications" }, { "paragraph_id": 27, "text": "Erbium oxide has a pink color, and is sometimes used as a colorant for glass, cubic zirconia and porcelain. The glass is then often used in sunglasses and cheap jewelry.", "title": "Applications" }, { "paragraph_id": 28, "text": "Erbium is used in nuclear technology in neutron-absorbing control rods. or as a burnable poison in nuclear fuel design. 
Recently, erbium has been used in experiments related to lattice confinement fusion.", "title": "Applications" }, { "paragraph_id": 29, "text": "Erbium does not have a biological role, but erbium salts can stimulate metabolism. Humans consume 1 milligram of erbium a year on average. The highest concentration of erbium in humans is in the bones, but there is also erbium in the human kidneys and liver. Erbium is slightly toxic if ingested, but erbium compounds are not toxic. Metallic erbium in dust form presents a fire and explosion hazard.", "title": "Biological role and precautions" } ]
Erbium is a chemical element; it has symbol Er and atomic number 68. A silvery-white solid metal when artificially isolated, natural erbium is always found in chemical combination with other elements. It is a lanthanide, a rare-earth element, originally found in the gadolinite mine in Ytterby, Sweden, which is the source of the element's name. Erbium's principal uses involve its pink-colored Er3+ ions, which have optical fluorescent properties particularly useful in certain laser applications. Erbium-doped glasses or crystals can be used as optical amplification media, where Er3+ ions are optically pumped at around 980 or 1480 nm and then radiate light at 1530 nm in stimulated emission. This process results in an unusually mechanically simple laser optical amplifier for signals transmitted by fiber optics. The 1550 nm wavelength is especially important for optical communications because standard single mode optical fibers have minimal loss at this particular wavelength. In addition to optical fiber amplifier-lasers, a large variety of medical applications (i.e. dermatology, dentistry) rely on the erbium ion's 2940 nm emission (see Er:YAG laser) when lit at another wavelength, which is highly absorbed in water in tissues, making its effect very superficial. Such shallow tissue deposition of laser energy is helpful in laser surgery, and for the efficient production of steam which produces enamel ablation by common types of dental laser.
2001-05-17T14:38:12Z
2023-11-17T21:02:47Z
[ "Template:Chem", "Template:Rp", "Template:Cite web", "Template:Cite book", "Template:Citation", "Template:ISBN", "Template:Wiktionary", "Template:Main article", "Template:Chem2", "Template:See also", "Template:Reflist", "Template:Cite journal", "Template:Commons", "Template:Clear", "Template:Val", "Template:Periodic table (navbox)", "Template:Erbium compounds", "Template:Authority control", "Template:Infobox erbium", "Template:Main", "Template:SimpleNuclide" ]
https://en.wikipedia.org/wiki/Erbium
9,479
Einsteinium
Einsteinium is a synthetic chemical element; it has symbol Es and atomic number 99. Einsteinium is a member of the actinide series and it is the seventh transuranium element. It was named in honor of Albert Einstein. Einsteinium was discovered as a component of the debris of the first hydrogen bomb explosion in 1952. Its most common isotope, einsteinium-253 (half-life 20.47 days), is produced artificially from decay of californium-253 in a few dedicated high-power nuclear reactors with a total yield on the order of one milligram per year. The reactor synthesis is followed by a complex process of separating einsteinium-253 from other actinides and products of their decay. Other isotopes are synthesized in various laboratories, but in much smaller amounts, by bombarding heavy actinide elements with light ions. Due to the small amounts of produced einsteinium and the short half-life of its most common isotope, there are no practical applications for it except basic scientific research. In particular, einsteinium was used to synthesize, for the first time, 17 atoms of the new element mendelevium in 1955. Einsteinium is a soft, silvery, paramagnetic metal. Its chemistry is typical of the late actinides, with a preponderance of the +3 oxidation state; the +2 oxidation state is also accessible, especially in solids. The high radioactivity of einsteinium-253 produces a visible glow and rapidly damages its crystalline metal lattice, with released heat of about 1000 watts per gram. Difficulty in studying its properties is due to einsteinium-253's decay to berkelium-249 and then californium-249 at a rate of about 3% per day. The isotope of einsteinium with the longest half-life, einsteinium-252 (half-life 471.7 days) would be more suitable for investigation of physical properties, but it has proven far more difficult to produce and is available only in minute quantities, and not in bulk. Einsteinium is the element with the highest atomic number which has been observed in macroscopic quantities in its pure form as einsteinium-253. Like all synthetic transuranium elements, isotopes of einsteinium are very radioactive and are considered highly dangerous to health on ingestion. Einsteinium was first identified in December 1952 by Albert Ghiorso and co-workers at the University of California, Berkeley in collaboration with the Argonne and Los Alamos National Laboratories, in the fallout from the Ivy Mike nuclear test. The test was carried out on November 1, 1952, at Enewetak Atoll in the Pacific Ocean and was the first successful test of a thermonuclear weapon. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 94Pu, which could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two beta decays. At the time, the multiple neutron absorption was thought to be an extremely rare process, but the identification of 94Pu indicated that still more neutrons could have been captured by the uranium nuclei, thereby producing new elements heavier than californium. Ghiorso and co-workers analyzed filter papers which had been flown through the explosion cloud on airplanes (the same sampling technique that had been used to discover 94Pu). Larger amounts of radioactive material were later isolated from coral debris of the atoll, which were delivered to the U.S. 
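The identification of the new plutonium isotope mentioned above follows from simple nucleon bookkeeping. The reconstruction below (the mass and atomic numbers are inferred from the six neutron captures and two beta decays stated in the text, not quoted from the source) makes the arithmetic explicit:

\[
^{238}_{92}\mathrm{U} \;\xrightarrow{\;+6n\;}\; ^{244}_{92}\mathrm{U} \;\xrightarrow{\;2\beta^-\;}\; ^{244}_{94}\mathrm{Pu},
\qquad A:\ 238+6 = 244, \quad Z:\ 92+2 = 94.
\]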
The separation of suspected new elements was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperatures; fewer than 200 atoms of einsteinium were recovered in the end. Nevertheless, element 99 (einsteinium), namely its Es isotope, could be detected via its characteristic high-energy alpha decay at 6.6 MeV. It was produced by the capture of 15 neutrons by uranium-238 nuclei followed by seven beta-decays, and had a half-life of 20.5 days. Such multiple neutron absorption was made possible by the high neutron flux density during the detonation, so that newly generated heavy isotopes had plenty of available neutrons to absorb before they could disintegrate into lighter elements. Neutron capture initially raised the mass number without changing the atomic number of the nuclide, and the concomitant beta-decays resulted in a gradual increase in the atomic number: Some U atoms, however, could absorb two additional neutrons (for a total of 17), resulting in Es, as well as in the Fm isotope of another new element, fermium. The discovery of the new elements and the associated new data on multiple neutron capture were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions and competition with Soviet Union in nuclear technologies. However, the rapid capture of so many neutrons would provide needed direct experimental confirmation of the so-called r-process multiple neutron absorption needed to explain the cosmic nucleosynthesis (production) of certain heavy chemical elements (heavier than nickel) in supernova explosions, before beta decay. Such a process is needed to explain the existence of many stable elements in the universe. Meanwhile, isotopes of element 99 (as well as of new element 100, fermium) were produced in the Berkeley and Argonne laboratories, in a nuclear reaction between nitrogen-14 and uranium-238, and later by intense neutron irradiation of plutonium or californium: These results were published in several articles in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The Berkeley team also reported some results on the chemical properties of einsteinium and fermium. The Ivy Mike results were declassified and published in 1955. In their discovery of the elements 99 and 100, the American teams had competed with a group at the Nobel Institute for Physics, Stockholm, Sweden. In late 1953 – early 1954, the Swedish group succeeded in the synthesis of light isotopes of element 100, in particular Fm, by bombarding uranium with oxygen nuclei. These results were also published in 1954. Nevertheless, the priority of the Berkeley team was generally recognized, as its publications preceded the Swedish article, and they were based on the previously undisclosed results of the 1952 thermonuclear explosion; thus the Berkeley team was given the privilege to name the new elements. As the effort which had led to the design of Ivy Mike was codenamed Project PANDA, element 99 had been jokingly nicknamed "Pandemonium" but the official names suggested by the Berkeley group derived from two prominent scientists, Albert Einstein and Enrico Fermi: "We suggest for the name for the element with the atomic number 99, einsteinium (symbol E) after Albert Einstein and for the name for the element with atomic number 100, fermium (symbol Fm), after Enrico Fermi." 
Both Einstein and Fermi died between the time the names were originally proposed and when they were announced. The discovery of these new elements was announced by Albert Ghiorso at the first Geneva Atomic Conference held on 8–20 August 1955. The symbol for einsteinium was first given as "E" and later changed to "Es" by IUPAC. Einsteinium is a synthetic, silver, radioactive metal. In the periodic table, it is located to the right of the actinide californium, to the left of the actinide fermium and below the lanthanide holmium with which it shares many similarities in physical and chemical properties. Its density of 8.84 g/cm is lower than that of californium (15.1 g/cm) and is nearly the same as that of holmium (8.79 g/cm), despite atomic einsteinium being much heavier than holmium. The melting point of einsteinium (860 °C) is also relatively low – below californium (900 °C), fermium (1527 °C) and holmium (1461 °C). Einsteinium is a soft metal, with the bulk modulus of only 15 GPa, which value is one of the lowest among non-alkali metals. Contrary to the lighter actinides californium, berkelium, curium and americium, which crystallize in a double hexagonal structure at ambient conditions, einsteinium is believed to have a face-centered cubic (fcc) symmetry with the space group Fm3m and the lattice constant a = 575 pm. However, there is a report of room-temperature hexagonal einsteinium metal with a = 398 pm and c = 650 pm, which converted to the fcc phase upon heating to 300 °C. The self-damage induced by the radioactivity of einsteinium is so strong that it rapidly destroys the crystal lattice, and the energy release during this process, 1000 watts per gram of Es, induces a visible glow. These processes may contribute to the relatively low density and melting point of einsteinium. Further, owing to the small size of the available samples, the melting point of einsteinium was often deduced by observing the sample being heated inside an electron microscope. Thus, the surface effects in small samples could reduce the melting point value. The metal is trivalent and has a noticeably high volatility. In order to reduce the self-radiation damage, most measurements of solid einsteinium and its compounds are performed right after thermal annealing. Also, some compounds are studied under the atmosphere of the reductant gas, for example H2O+HCl for EsOCl so that the sample is partly regrown during its decomposition. Apart from the self-destruction of solid einsteinium and its compounds, other intrinsic difficulties in studying this element include scarcity – the most common Es isotope is available only once or twice a year in sub-milligram amounts – and self-contamination due to rapid conversion of einsteinium to berkelium and then to californium at a rate of about 3.3% per day: Thus, most einsteinium samples are contaminated, and their intrinsic properties are often deduced by extrapolating back experimental data accumulated over time. Other experimental techniques to circumvent the contamination problem include selective optical excitation of einsteinium ions by a tunable laser, such as in studying its luminescence properties. Magnetic properties have been studied for einsteinium metal, its oxide and fluoride. All three materials showed Curie–Weiss paramagnetic behavior from liquid helium to room temperature. 
The effective magnetic moments were deduced as 10.4±0.3 μB for Es2O3 and 11.4±0.3 μB for the EsF3, which are the highest values among actinides, and the corresponding Curie temperatures are 53 and 37 K. Like all actinides, einsteinium is rather reactive. Its trivalent oxidation state is most stable in solids and aqueous solution where it induces a pale pink color. The existence of divalent einsteinium is firmly established, especially in the solid phase; such +2 state is not observed in many other actinides, including protactinium, uranium, neptunium, plutonium, curium and berkelium. Einsteinium(II) compounds can be obtained, for example, by reducing einsteinium(III) with samarium(II) chloride. The oxidation state +4 was postulated from vapor studies and is as yet uncertain. Nineteen isotopes and three nuclear isomers are known for einsteinium, with mass numbers ranging from 240 to 257. All are radioactive and the most stable nuclide, Es, has a half-life of 471.7 days. The next most stable isotopes are Es (half-life 275.7 days), Es (39.8 days), and Es (20.47 days). All of the remaining isotopes have half-lives shorter than 40 hours, most shorter than 30 minutes. Of the three nuclear isomers, the most stable is Es with a half-life of 39.3 hours. Einsteinium has a high rate of nuclear fission that results in a low critical mass for a sustained nuclear chain reaction. This mass is 9.89 kilograms for a bare sphere of Es isotope, and can be lowered to 2.9 kilograms by adding a 30-centimeter-thick steel neutron reflector, or even to 2.26 kilograms with a 20-cm-thick reflector made of water. However, even this small critical mass greatly exceeds the total amount of einsteinium isolated thus far, especially of the rare Es isotope. Because of the short half-life of all isotopes of einsteinium, any primordial einsteinium—that is, einsteinium that could have been present on the Earth at its formation—has long since decayed. Synthesis of einsteinium from naturally-occurring actinides uranium and thorium in the Earth's crust requires multiple neutron capture, which is an extremely unlikely event. Therefore, all terrestrial einsteinium is produced in scientific laboratories, high-power nuclear reactors, or in nuclear weapons tests, and exists only within a few years from the time of the synthesis. The transuranic elements from americium to fermium, including einsteinium, were once created in the natural nuclear fission reactor at Oklo, but no longer. Einsteinium was theoretically observed in the spectrum of Przybylski's Star. However, the lead author of the studies finding einsteinium and other short-lived actinides in Przybylski's Star, Vera F. Gopka, admitted that "the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are not any atomic data for these lines except for their wavelengths (Sansonetti et al. 2004), enabling one to calculate their profiles with more or less real intensities." The signature spectra of einsteinium's isotopes have since been comprehensively analyzed experimentally (in 2021), though there is no published research confirming whether the theorized einsteinium signatures proposed to be found in the star's spectrum match the lab-determined results. Einsteinium is produced in minute quantities by bombarding lighter actinides with neutrons in dedicated high-flux nuclear reactors. 
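As an illustration of this reactor route, the final step yielding einsteinium-253 is presumably single neutron capture on californium-252 followed by beta decay; the sketch below is a reconstruction under that assumption (the 252 mass number is inferred from the mass balance, not stated explicitly in the text):

\[
^{252}_{98}\mathrm{Cf} + n \;\longrightarrow\; ^{253}_{98}\mathrm{Cf} \;\xrightarrow{\;\beta^-\;}\; ^{253}_{99}\mathrm{Es}.
\]

In a high-flux reactor this capture–decay step sits at the end of a longer chain of successive neutron captures on lighter actinides.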
The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, U.S., and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium (Z > 96) elements. These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not widely reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium (Bk) and einsteinium and picogram quantities of fermium. The first microscopic sample of Es, weighing about 10 nanograms, was prepared in 1961 at HFIR. A special magnetic balance was designed to estimate its weight. Larger batches were produced later starting from several kilograms of plutonium, with einsteinium yields (mostly Es) of 0.48 milligrams in 1967–1970, 3.2 milligrams in 1971–1973, followed by steady production of about 3 milligrams per year between 1974 and 1978. These quantities, however, refer to the integral amount in the target right after irradiation. 
A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the mainland U.S. The laboratory received samples for analysis as soon as possible from airplanes equipped with paper filters that flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, none of these were found even after a series of megaton explosions conducted between 1954 and 1956 at the atoll. The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were tried, as well as a mixed plutonium-neptunium charge, but they were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Product isolation was problematic, as the explosions spread debris through melting and vaporizing the surrounding rocks at depths of 300–600 meters. Drilling to such depths to extract the products was both slow and inefficient in terms of collected volumes. Among the nine underground tests that were carried out between 1962 and 1969, the last one was the most powerful and had the highest yield of transuranium elements. Milligrams of einsteinium that would normally take a year of irradiation in a high-power reactor were produced within a microsecond. However, the major practical problem of the entire proposal was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4×10 of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 1×10 of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test, which demonstrated the highly non-linear dependence of the transuranium element yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after the explosion, so that the explosion would expel radioactive material from the epicenter through the shafts into collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds of kilograms of material, but with an actinide concentration 3 times lower than in samples obtained after drilling. Whereas such a method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides. Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. The separation procedure for einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy ion target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. 
However, the produced amounts in such experiments are relatively low. The yields are much higher for reactor irradiation, but there, the product is a mixture of various actinide isotopes, as well as lanthanides produced in the nuclear fission decays. In this case, isolation of einsteinium is a tedious procedure which involves several repeating steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important, because the most common einsteinium isotope produced in nuclear reactors, Es, decays with a half-life of only 20 days to Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions. Separation of trivalent actinides from lanthanide fission products can be done by a cation-exchange resin column using a 90% water/10% ethanol solution saturated with hydrochloric acid (HCl) as eluant. It is usually followed by anion-exchange chromatography using 6 molar HCl as eluant. A cation-exchange resin column (Dowex-50 exchange column) treated with ammonium salts is then used to separate fractions containing elements 99, 100 and 101. These elements can be then identified simply based on their elution position/time, using α-hydroxyisobutyrate solution (α-HIB), for example, as eluant. Separation of the 3+ actinides can also be achieved by solvent extraction chromatography, using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase, and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column. The einsteinium separated by this method has the advantage to be free of organic complexing agent, as compared to the separation using a resin column. Einsteinium is highly reactive and therefore strong reducing agents are required to obtain the pure metal from its compounds. This can be achieved by reduction of einsteinium(III) fluoride with metallic lithium: However, owing to its low melting point and high rate of self-radiation damage, einsteinium has high vapor pressure, which is higher than that of lithium fluoride. This makes this reduction reaction rather inefficient. It was tried in the early preparation attempts and quickly abandoned in favor of reduction of einsteinium(III) oxide with lanthanum metal: Einsteinium(III) oxide (Es2O3) was obtained by burning einsteinium(III) nitrate. It forms colorless cubic crystals, which were first characterized from microgram samples sized about 30 nanometers. Two other phases, monoclinic and hexagonal, are known for this oxide. The formation of a certain Es2O3 phase depends on the preparation technique and sample history, and there is no clear phase diagram. Interconversions between the three phases can occur spontaneously, as a result of self-irradiation or self-heating. The hexagonal phase is isotypic with lanthanum oxide where the Es ion is surrounded by a 6-coordinated group of O ions. Einsteinium halides are known for the oxidation states +2 and +3. The most stable state is +3 for all halides from fluoride to iodide. Einsteinium(III) fluoride (EsF3) can be precipitated from einsteinium(III) chloride solutions upon reaction with fluoride ions. 
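The two metallothermic reductions mentioned above (lithium reduction of the trifluoride, later abandoned in favor of lanthanum reduction of the oxide) correspond to the following balanced equations; the stoichiometry is reconstructed here from the reactants and products named in the text rather than quoted from it:

\[
\mathrm{EsF_3} + 3\,\mathrm{Li} \;\longrightarrow\; \mathrm{Es} + 3\,\mathrm{LiF},
\qquad
\mathrm{Es_2O_3} + 2\,\mathrm{La} \;\longrightarrow\; 2\,\mathrm{Es} + \mathrm{La_2O_3}.
\]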
An alternative preparation procedure for the trifluoride is to expose einsteinium(III) oxide to chlorine trifluoride (ClF3) or F2 gas at a pressure of 1–2 atmospheres and a temperature between 300 and 400 °C. The EsF3 crystal structure is hexagonal, as in californium(III) fluoride (CfF3) where the Es ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prism arrangement. Einsteinium(III) chloride (EsCl3) can be prepared by annealing einsteinium(III) oxide in the atmosphere of dry hydrogen chloride vapors at about 500 °C for some 20 minutes. It crystallizes upon cooling at about 425 °C into an orange solid with a hexagonal structure of UCl3 type, where einsteinium atoms are 9-fold coordinated by chlorine atoms in a tricapped trigonal prism geometry. Einsteinium(III) bromide (EsBr3) is a pale-yellow solid with a monoclinic structure of AlCl3 type, where the einsteinium atoms are octahedrally coordinated by bromine (coordination number 6). The divalent compounds of einsteinium are obtained by reducing the trivalent halides with hydrogen: Einsteinium(II) chloride (EsCl2), einsteinium(II) bromide (EsBr2), and einsteinium(II) iodide (EsI2) have been produced and characterized by optical absorption, with no structural information available yet. Known oxyhalides of einsteinium include EsOCl, EsOBr and EsOI. These salts are synthesized by treating a trihalide with a vapor mixture of water and the corresponding hydrogen halide: for example, EsCl3 + H2O/HCl to obtain EsOCl. The high radioactivity of einsteinium has a potential use in radiation therapy, and organometallic complexes have been synthesized in order to deliver einsteinium atoms to an appropriate organ in the body. Experiments have been performed on injecting einsteinium citrate (as well as fermium compounds) into dogs. Einsteinium(III) was also incorporated into beta-diketone chelate complexes, since analogous complexes with lanthanides had previously shown the strongest UV-excited luminescence among metallorganic compounds. When preparing einsteinium complexes, the Es ions were diluted 1000-fold with Gd ions. This made it possible to reduce the radiation damage so that the compounds did not disintegrate during the period of 20 minutes required for the measurements. The resulting luminescence from Es was much too weak to be detected. This was explained by the unfavorable relative energies of the individual constituents of the compound, which hindered efficient energy transfer from the chelate matrix to Es ions. A similar conclusion was drawn for the other actinides americium, berkelium and fermium. Luminescence of Es ions was, however, observed in inorganic hydrochloric acid solutions as well as in organic solution with di(2-ethylhexyl)orthophosphoric acid. It shows a broad peak at about 1064 nanometers (half-width about 100 nm) which can be resonantly excited by green light (ca. 495 nm wavelength). The luminescence has a lifetime of several microseconds and a quantum yield below 0.1%. The non-radiative decay rates in Es, which are relatively high compared to those in lanthanides, were associated with the stronger interaction of f-electrons with the inner Es electrons. There is almost no use for any isotope of einsteinium outside basic scientific research aiming at production of higher transuranium elements and superheavy elements. In 1955, mendelevium was synthesized by irradiating a target consisting of about 10 atoms of Es in the 60-inch cyclotron at Berkeley Laboratory. The resulting Es(α,n)Md reaction yielded 17 atoms of the new element with the atomic number of 101. 
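Assuming the target was the common isotope einsteinium-253 (the text gives the atom count but not the mass number), the nucleon bookkeeping of this historic (α,n) reaction works out as follows; the 253 and 256 mass numbers are an inference, not a quotation from the source:

\[
^{253}_{99}\mathrm{Es} + {}^{4}_{2}\mathrm{He} \;\longrightarrow\; {}^{256}_{101}\mathrm{Md} + {}^{1}_{0}n,
\qquad A:\ 253+4 = 256+1, \quad Z:\ 99+2 = 101+0.
\]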
The rare isotope Es is favored for production of superheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms. Hence Es was used as a target in the attempted synthesis of ununennium (element 119) in 1985 by bombarding it with calcium-48 ions at the superHILAC linear particle accelerator at Berkeley, California. No atoms were identified, setting an upper limit for the cross section of this reaction at 300 nanobarns. Es was used as the calibration marker in the chemical analysis spectrometer ("alpha-scattering surface analyzer") of the Surveyor 5 lunar probe. The large mass of this isotope reduced the spectral overlap between signals from the marker and the studied lighter elements of the lunar surface. Most of the available einsteinium toxicity data is from research on animals. Upon ingestion by rats, only ~0.01% of it ends up in the bloodstream. From there, about 65% goes to the bones, where it would remain for ~50 years if not for its radioactive decay (not to speak of the 3-year maximum lifespan of rats), 25% to the lungs (biological half-life ~20 years, though this is again rendered irrelevant by the short half-life of einsteinium), 0.035% to the testicles or 0.01% to the ovaries – where einsteinium stays indefinitely. About 10% of the ingested amount is excreted. The distribution of einsteinium over bone surfaces is uniform and is similar to that of plutonium.
[ { "paragraph_id": 0, "text": "Einsteinium is a synthetic chemical element; it has symbol Es and atomic number 99. Einsteinium is a member of the actinide series and it is the seventh transuranium element. It was named in honor of Albert Einstein.", "title": "" }, { "paragraph_id": 1, "text": "Einsteinium was discovered as a component of the debris of the first hydrogen bomb explosion in 1952. Its most common isotope, einsteinium-253 (half-life 20.47 days), is produced artificially from decay of californium-253 in a few dedicated high-power nuclear reactors with a total yield on the order of one milligram per year. The reactor synthesis is followed by a complex process of separating einsteinium-253 from other actinides and products of their decay. Other isotopes are synthesized in various laboratories, but in much smaller amounts, by bombarding heavy actinide elements with light ions. Due to the small amounts of produced einsteinium and the short half-life of its most common isotope, there are no practical applications for it except basic scientific research. In particular, einsteinium was used to synthesize, for the first time, 17 atoms of the new element mendelevium in 1955.", "title": "" }, { "paragraph_id": 2, "text": "Einsteinium is a soft, silvery, paramagnetic metal. Its chemistry is typical of the late actinides, with a preponderance of the +3 oxidation state; the +2 oxidation state is also accessible, especially in solids. The high radioactivity of einsteinium-253 produces a visible glow and rapidly damages its crystalline metal lattice, with released heat of about 1000 watts per gram. Difficulty in studying its properties is due to einsteinium-253's decay to berkelium-249 and then californium-249 at a rate of about 3% per day. The isotope of einsteinium with the longest half-life, einsteinium-252 (half-life 471.7 days) would be more suitable for investigation of physical properties, but it has proven far more difficult to produce and is available only in minute quantities, and not in bulk. Einsteinium is the element with the highest atomic number which has been observed in macroscopic quantities in its pure form as einsteinium-253.", "title": "" }, { "paragraph_id": 3, "text": "Like all synthetic transuranium elements, isotopes of einsteinium are very radioactive and are considered highly dangerous to health on ingestion.", "title": "" }, { "paragraph_id": 4, "text": "Einsteinium was first identified in December 1952 by Albert Ghiorso and co-workers at the University of California, Berkeley in collaboration with the Argonne and Los Alamos National Laboratories, in the fallout from the Ivy Mike nuclear test. The test was carried out on November 1, 1952, at Enewetak Atoll in the Pacific Ocean and was the first successful test of a thermonuclear weapon. 
Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 94Pu, which could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two beta decays.", "title": "History" }, { "paragraph_id": 5, "text": "At the time, the multiple neutron absorption was thought to be an extremely rare process, but the identification of 94Pu indicated that still more neutrons could have been captured by the uranium nuclei, thereby producing new elements heavier than californium.", "title": "History" }, { "paragraph_id": 6, "text": "Ghiorso and co-workers analyzed filter papers which had been flown through the explosion cloud on airplanes (the same sampling technique that had been used to discover 94Pu). Larger amounts of radioactive material were later isolated from coral debris of the atoll, which were delivered to the U.S. The separation of suspected new elements was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperatures; fewer than 200 atoms of einsteinium were recovered in the end. Nevertheless, element 99 (einsteinium), namely its Es isotope, could be detected via its characteristic high-energy alpha decay at 6.6 MeV. It was produced by the capture of 15 neutrons by uranium-238 nuclei followed by seven beta-decays, and had a half-life of 20.5 days. Such multiple neutron absorption was made possible by the high neutron flux density during the detonation, so that newly generated heavy isotopes had plenty of available neutrons to absorb before they could disintegrate into lighter elements. Neutron capture initially raised the mass number without changing the atomic number of the nuclide, and the concomitant beta-decays resulted in a gradual increase in the atomic number:", "title": "History" }, { "paragraph_id": 7, "text": "Some U atoms, however, could absorb two additional neutrons (for a total of 17), resulting in Es, as well as in the Fm isotope of another new element, fermium. The discovery of the new elements and the associated new data on multiple neutron capture were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions and competition with Soviet Union in nuclear technologies. However, the rapid capture of so many neutrons would provide needed direct experimental confirmation of the so-called r-process multiple neutron absorption needed to explain the cosmic nucleosynthesis (production) of certain heavy chemical elements (heavier than nickel) in supernova explosions, before beta decay. Such a process is needed to explain the existence of many stable elements in the universe.", "title": "History" }, { "paragraph_id": 8, "text": "Meanwhile, isotopes of element 99 (as well as of new element 100, fermium) were produced in the Berkeley and Argonne laboratories, in a nuclear reaction between nitrogen-14 and uranium-238, and later by intense neutron irradiation of plutonium or californium:", "title": "History" }, { "paragraph_id": 9, "text": "These results were published in several articles in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The Berkeley team also reported some results on the chemical properties of einsteinium and fermium. 
The Ivy Mike results were declassified and published in 1955.", "title": "History" }, { "paragraph_id": 10, "text": "In their discovery of the elements 99 and 100, the American teams had competed with a group at the Nobel Institute for Physics, Stockholm, Sweden. In late 1953 – early 1954, the Swedish group succeeded in the synthesis of light isotopes of element 100, in particular Fm, by bombarding uranium with oxygen nuclei. These results were also published in 1954. Nevertheless, the priority of the Berkeley team was generally recognized, as its publications preceded the Swedish article, and they were based on the previously undisclosed results of the 1952 thermonuclear explosion; thus the Berkeley team was given the privilege to name the new elements. As the effort which had led to the design of Ivy Mike was codenamed Project PANDA, element 99 had been jokingly nicknamed \"Pandemonium\" but the official names suggested by the Berkeley group derived from two prominent scientists, Albert Einstein and Enrico Fermi: \"We suggest for the name for the element with the atomic number 99, einsteinium (symbol E) after Albert Einstein and for the name for the element with atomic number 100, fermium (symbol Fm), after Enrico Fermi.\" Both Einstein and Fermi died between the time the names were originally proposed and when they were announced. The discovery of these new elements was announced by Albert Ghiorso at the first Geneva Atomic Conference held on 8–20 August 1955. The symbol for einsteinium was first given as \"E\" and later changed to \"Es\" by IUPAC.", "title": "History" }, { "paragraph_id": 11, "text": "Einsteinium is a synthetic, silver, radioactive metal. In the periodic table, it is located to the right of the actinide californium, to the left of the actinide fermium and below the lanthanide holmium with which it shares many similarities in physical and chemical properties. Its density of 8.84 g/cm is lower than that of californium (15.1 g/cm) and is nearly the same as that of holmium (8.79 g/cm), despite atomic einsteinium being much heavier than holmium. The melting point of einsteinium (860 °C) is also relatively low – below californium (900 °C), fermium (1527 °C) and holmium (1461 °C). Einsteinium is a soft metal, with the bulk modulus of only 15 GPa, which value is one of the lowest among non-alkali metals.", "title": "Characteristics" }, { "paragraph_id": 12, "text": "Contrary to the lighter actinides californium, berkelium, curium and americium, which crystallize in a double hexagonal structure at ambient conditions, einsteinium is believed to have a face-centered cubic (fcc) symmetry with the space group Fm3m and the lattice constant a = 575 pm. However, there is a report of room-temperature hexagonal einsteinium metal with a = 398 pm and c = 650 pm, which converted to the fcc phase upon heating to 300 °C.", "title": "Characteristics" }, { "paragraph_id": 13, "text": "The self-damage induced by the radioactivity of einsteinium is so strong that it rapidly destroys the crystal lattice, and the energy release during this process, 1000 watts per gram of Es, induces a visible glow. These processes may contribute to the relatively low density and melting point of einsteinium. Further, owing to the small size of the available samples, the melting point of einsteinium was often deduced by observing the sample being heated inside an electron microscope. 
Thus, the surface effects in small samples could reduce the melting point value.", "title": "Characteristics" }, { "paragraph_id": 14, "text": "The metal is trivalent and has a noticeably high volatility. In order to reduce the self-radiation damage, most measurements of solid einsteinium and its compounds are performed right after thermal annealing. Also, some compounds are studied under the atmosphere of the reductant gas, for example H2O+HCl for EsOCl so that the sample is partly regrown during its decomposition.", "title": "Characteristics" }, { "paragraph_id": 15, "text": "Apart from the self-destruction of solid einsteinium and its compounds, other intrinsic difficulties in studying this element include scarcity – the most common Es isotope is available only once or twice a year in sub-milligram amounts – and self-contamination due to rapid conversion of einsteinium to berkelium and then to californium at a rate of about 3.3% per day:", "title": "Characteristics" }, { "paragraph_id": 16, "text": "Thus, most einsteinium samples are contaminated, and their intrinsic properties are often deduced by extrapolating back experimental data accumulated over time. Other experimental techniques to circumvent the contamination problem include selective optical excitation of einsteinium ions by a tunable laser, such as in studying its luminescence properties.", "title": "Characteristics" }, { "paragraph_id": 17, "text": "Magnetic properties have been studied for einsteinium metal, its oxide and fluoride. All three materials showed Curie–Weiss paramagnetic behavior from liquid helium to room temperature. The effective magnetic moments were deduced as 10.4±0.3 μB for Es2O3 and 11.4±0.3 μB for the EsF3, which are the highest values among actinides, and the corresponding Curie temperatures are 53 and 37 K.", "title": "Characteristics" }, { "paragraph_id": 18, "text": "Like all actinides, einsteinium is rather reactive. Its trivalent oxidation state is most stable in solids and aqueous solution where it induces a pale pink color. The existence of divalent einsteinium is firmly established, especially in the solid phase; such +2 state is not observed in many other actinides, including protactinium, uranium, neptunium, plutonium, curium and berkelium. Einsteinium(II) compounds can be obtained, for example, by reducing einsteinium(III) with samarium(II) chloride. The oxidation state +4 was postulated from vapor studies and is as yet uncertain.", "title": "Characteristics" }, { "paragraph_id": 19, "text": "Nineteen isotopes and three nuclear isomers are known for einsteinium, with mass numbers ranging from 240 to 257. All are radioactive and the most stable nuclide, Es, has a half-life of 471.7 days. The next most stable isotopes are Es (half-life 275.7 days), Es (39.8 days), and Es (20.47 days). All of the remaining isotopes have half-lives shorter than 40 hours, most shorter than 30 minutes. Of the three nuclear isomers, the most stable is Es with a half-life of 39.3 hours.", "title": "Characteristics" }, { "paragraph_id": 20, "text": "Einsteinium has a high rate of nuclear fission that results in a low critical mass for a sustained nuclear chain reaction. This mass is 9.89 kilograms for a bare sphere of Es isotope, and can be lowered to 2.9 kilograms by adding a 30-centimeter-thick steel neutron reflector, or even to 2.26 kilograms with a 20-cm-thick reflector made of water. 
However, even this small critical mass greatly exceeds the total amount of einsteinium isolated thus far, especially of the rare Es isotope.", "title": "Characteristics" }, { "paragraph_id": 21, "text": "Because of the short half-life of all isotopes of einsteinium, any primordial einsteinium—that is, einsteinium that could have been present on the Earth at its formation—has long since decayed. Synthesis of einsteinium from naturally-occurring actinides uranium and thorium in the Earth's crust requires multiple neutron capture, which is an extremely unlikely event. Therefore, all terrestrial einsteinium is produced in scientific laboratories, high-power nuclear reactors, or in nuclear weapons tests, and exists only within a few years from the time of the synthesis.", "title": "Characteristics" }, { "paragraph_id": 22, "text": "The transuranic elements from americium to fermium, including einsteinium, were once created in the natural nuclear fission reactor at Oklo, but no longer.", "title": "Characteristics" }, { "paragraph_id": 23, "text": "Einsteinium was theoretically observed in the spectrum of Przybylski's Star. However, the lead author of the studies finding einsteinium and other short-lived actinides in Przybylski's Star, Vera F. Gopka, admitted that \"the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are not any atomic data for these lines except for their wavelengths (Sansonetti et al. 2004), enabling one to calculate their profiles with more or less real intensities.\" The signature spectra of einsteinium's isotopes have since been comprehensively analyzed experimentally (in 2021), though there is no published research confirming whether the theorized einsteinium signatures proposed to be found in the star's spectrum match the lab-determined results.", "title": "Characteristics" }, { "paragraph_id": 24, "text": "Einsteinium is produced in minute quantities by bombarding lighter actinides with neutrons in dedicated high-flux nuclear reactors. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, U.S., and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium (Z > 96) elements. These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not widely reported. In a \"typical processing campaign\" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium (Bk) and einsteinium and picogram quantities of fermium.", "title": "Synthesis and extraction" }, { "paragraph_id": 25, "text": "The first microscopic sample of Es sample weighing about 10 nanograms was prepared in 1961 at HFIR. A special magnetic balance was designed to estimate its weight. Larger batches were produced later starting from several kilograms of plutonium with the einsteinium yields (mostly Es) of 0.48 milligrams in 1967–1970, 3.2 milligrams in 1971–1973, followed by steady production of about 3 milligrams per year between 1974 and 1978. These quantities however refer to the integral amount in the target right after irradiation. 
Subsequent separation procedures reduced the amount of isotopically pure einsteinium roughly tenfold.", "title": "Synthesis and extraction" }, { "paragraph_id": 26, "text": "Heavy neutron irradiation of plutonium results in four major isotopes of einsteinium: Es (α-emitter with half-life of 20.47 days and with a spontaneous fission half-life of 7×10 years); Es (β-emitter with half-life of 39.3 hours), Es (α-emitter with half-life of about 276 days) and Es (β-emitter with half-life of 39.8 days). An alternative route involves bombardment of uranium-238 with high-intensity nitrogen or oxygen ion beams.", "title": "Synthesis and extraction" }, { "paragraph_id": 27, "text": "Einsteinium-247 (half-life 4.55 minutes) was produced by irradiating americium-241 with carbon or uranium-238 with nitrogen ions. The latter reaction was first realized in 1967 in Dubna, Russia, and the involved scientists were awarded the Lenin Komsomol Prize.", "title": "Synthesis and extraction" }, { "paragraph_id": 28, "text": "The isotope Es was produced by irradiating Cf with deuterium ions. It mainly decays by emission of electrons to Cf with a half-life of 25±5 minutes, but also releases α-particles of 6.87 MeV energy, with the ratio of electrons to α-particles of about 400.", "title": "Synthesis and extraction" }, { "paragraph_id": 29, "text": "The heavier isotopes Es, Es, Es and Es were obtained by bombarding Bk with α-particles. One to four neutrons are liberated in this process making possible the formation of four different isotopes in one reaction.", "title": "Synthesis and extraction" }, { "paragraph_id": 30, "text": "Einsteinium-253 was produced by irradiating a 0.1–0.2 milligram Cf target with a thermal neutron flux of (2–5)×10 neutrons·cm·s for 500–900 hours:", "title": "Synthesis and extraction" }, { "paragraph_id": 31, "text": "In 2020, scientists at the Oak Ridge National Laboratory were able to create 233 nanograms of Es, a new world record. This allowed some chemical properties of the element to be studied for the first time.", "title": "Synthesis and extraction" }, { "paragraph_id": 32, "text": "The analysis of the debris at the 10-megaton Ivy Mike nuclear test was a part of long-term project. One of the goals of which was studying the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was that synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful man-made neutron sources, providing densities of the order 10 neutrons/cm within a microsecond, or about 10 neutrons/(cm·s). In comparison, the flux of the HFIR reactor is 5×10 neutrons/(cm·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the mainland U.S. The laboratory was receiving samples for analysis as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. 
Whereas it was hoped to discover new chemical elements heavier than fermium, none of these were found even after a series of megaton explosions conducted between 1954 and 1956 at the atoll.", "title": "Synthesis and extraction" }, { "paragraph_id": 33, "text": "The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium have been tried, as well as a mixed plutonium-neptunium charge, but they were less successful in terms of yield and was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Product isolation was problematic as the explosions were spreading debris through melting and vaporizing the surrounding rocks at depths of 300–600 meters. Drilling to such depths to extract the products was both slow and inefficient in terms of collected volumes.", "title": "Synthesis and extraction" }, { "paragraph_id": 34, "text": "Among the nine underground tests that were carried between 1962 and 1969, the last one was the most powerful and had the highest yield of transuranium elements. Milligrams of einsteinium that would normally take a year of irradiation in a high-power reactor, were produced within a microsecond. However, the major practical problem of the entire proposal was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4×10 of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 1×10 of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test which demonstrated the highly non-linear dependence of the transuranium elements yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after explosion, so that explosion would expel radioactive material from the epicenter through the shafts and to collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds kilograms of material, but with actinide concentration 3 times lower than in samples obtained after drilling. Whereas such method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides.", "title": "Synthesis and extraction" }, { "paragraph_id": 35, "text": "Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories.", "title": "Synthesis and extraction" }, { "paragraph_id": 36, "text": "Separation procedure of einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy ion target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. However, the produced amounts in such experiments are relatively low. 
The yields are much higher for reactor irradiation, but there, the product is a mixture of various actinide isotopes, as well as lanthanides produced in the nuclear fission decays. In this case, isolation of einsteinium is a tedious procedure which involves several repeating steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important, because the most common einsteinium isotope produced in nuclear reactors, Es, decays with a half-life of only 20 days to Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions.", "title": "Synthesis and extraction" }, { "paragraph_id": 37, "text": "Separation of trivalent actinides from lanthanide fission products can be done by a cation-exchange resin column using a 90% water/10% ethanol solution saturated with hydrochloric acid (HCl) as eluant. It is usually followed by anion-exchange chromatography using 6 molar HCl as eluant. A cation-exchange resin column (Dowex-50 exchange column) treated with ammonium salts is then used to separate fractions containing elements 99, 100 and 101. These elements can be then identified simply based on their elution position/time, using α-hydroxyisobutyrate solution (α-HIB), for example, as eluant.", "title": "Synthesis and extraction" }, { "paragraph_id": 38, "text": "Separation of the 3+ actinides can also be achieved by solvent extraction chromatography, using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase, and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column. The einsteinium separated by this method has the advantage to be free of organic complexing agent, as compared to the separation using a resin column.", "title": "Synthesis and extraction" }, { "paragraph_id": 39, "text": "Einsteinium is highly reactive and therefore strong reducing agents are required to obtain the pure metal from its compounds. This can be achieved by reduction of einsteinium(III) fluoride with metallic lithium:", "title": "Synthesis and extraction" }, { "paragraph_id": 40, "text": "However, owing to its low melting point and high rate of self-radiation damage, einsteinium has high vapor pressure, which is higher than that of lithium fluoride. This makes this reduction reaction rather inefficient. It was tried in the early preparation attempts and quickly abandoned in favor of reduction of einsteinium(III) oxide with lanthanum metal:", "title": "Synthesis and extraction" }, { "paragraph_id": 41, "text": "Einsteinium(III) oxide (Es2O3) was obtained by burning einsteinium(III) nitrate. It forms colorless cubic crystals, which were first characterized from microgram samples sized about 30 nanometers. Two other phases, monoclinic and hexagonal, are known for this oxide. The formation of a certain Es2O3 phase depends on the preparation technique and sample history, and there is no clear phase diagram. Interconversions between the three phases can occur spontaneously, as a result of self-irradiation or self-heating. The hexagonal phase is isotypic with lanthanum oxide where the Es ion is surrounded by a 6-coordinated group of O ions.", "title": "Chemical compounds" }, { "paragraph_id": 42, "text": "Einsteinium halides are known for the oxidation states +2 and +3. 
The most stable state is +3 for all halides from fluoride to iodide.", "title": "Chemical compounds" }, { "paragraph_id": 43, "text": "Einsteinium(III) fluoride (EsF3) can be precipitated from einsteinium(III) chloride solutions upon reaction with fluoride ions. An alternative preparation procedure is to exposure einsteinium(III) oxide to chlorine trifluoride (ClF3) or F2 gas at a pressure of 1–2 atmospheres and a temperature between 300 and 400 °C. The EsF3 crystal structure is hexagonal, as in californium(III) fluoride (CfF3) where the Es ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prism arrangement.", "title": "Chemical compounds" }, { "paragraph_id": 44, "text": "Einsteinium(III) chloride (EsCl3) can be prepared by annealing einsteinium(III) oxide in the atmosphere of dry hydrogen chloride vapors at about 500 °C for some 20 minutes. It crystallizes upon cooling at about 425 °C into an orange solid with a hexagonal structure of UCl3 type, where einsteinium atoms are 9-fold coordinated by chlorine atoms in a tricapped trigonal prism geometry. Einsteinium(III) bromide (EsBr3) is a pale-yellow solid with a monoclinic structure of AlCl3 type, where the einsteinium atoms are octahedrally coordinated by bromine (coordination number 6).", "title": "Chemical compounds" }, { "paragraph_id": 45, "text": "The divalent compounds of einsteinium are obtained by reducing the trivalent halides with hydrogen:", "title": "Chemical compounds" }, { "paragraph_id": 46, "text": "Einsteinium(II) chloride (EsCl2), einsteinium(II) bromide (EsBr2), and einsteinium(II) iodide (EsI2) have been produced and characterized by optical absorption, with no structural information available yet.", "title": "Chemical compounds" }, { "paragraph_id": 47, "text": "Known oxyhalides of einsteinium include EsOCl, EsOBr and EsOI. These salts are synthesized by treating a trihalide with a vapor mixture of water and the corresponding hydrogen halide: for example, EsCl3 + H2O/HCl to obtain EsOCl.", "title": "Chemical compounds" }, { "paragraph_id": 48, "text": "The high radioactivity of einsteinium has a potential use in radiation therapy, and organometallic complexes have been synthesized in order to deliver einsteinium atoms to an appropriate organ in the body. Experiments have been performed on injecting einsteinium citrate (as well as fermium compounds) to dogs. Einsteinium(III) was also incorporated into beta-diketone chelate complexes, since analogous complexes with lanthanides previously showed strongest UV-excited luminescence among metallorganic compounds. When preparing einsteinium complexes, the Es ions were 1000 times diluted with Gd ions. This allowed reducing the radiation damage so that the compounds did not disintegrate during the period of 20 minutes required for the measurements. The resulting luminescence from Es was much too weak to be detected. This was explained by the unfavorable relative energies of the individual constituents of the compound that hindered efficient energy transfer from the chelate matrix to Es ions. Similar conclusion was drawn for other actinides americium, berkelium and fermium.", "title": "Chemical compounds" }, { "paragraph_id": 49, "text": "Luminescence of Es ions was however observed in inorganic hydrochloric acid solutions as well as in organic solution with di(2-ethylhexyl)orthophosphoric acid. It shows a broad peak at about 1064 nanometers (half-width about 100 nm) which can be resonantly excited by green light (ca. 495 nm wavelength). 
The luminescence has a lifetime of several microseconds and the quantum yield below 0.1%. The relatively high, compared to lanthanides, non-radiative decay rates in Es were associated with the stronger interaction of f-electrons with the inner Es electrons.", "title": "Chemical compounds" }, { "paragraph_id": 50, "text": "There is almost no use for any isotope of einsteinium outside basic scientific research aiming at production of higher transuranium elements and superheavy elements.", "title": "Applications" }, { "paragraph_id": 51, "text": "In 1955, mendelevium was synthesized by irradiating a target consisting of about 10 atoms of Es in the 60-inch cyclotron at Berkeley Laboratory. The resulting Es(α,n)Md reaction yielded 17 atoms of the new element with the atomic number of 101.", "title": "Applications" }, { "paragraph_id": 52, "text": "The rare isotope Es is favored for production of superheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms. Hence Es was used as a target in the attempted synthesis of ununennium (element 119) in 1985 by bombarding it with calcium-48 ions at the superHILAC linear particle accelerator at Berkeley, California. No atoms were identified, setting an upper limit for the cross section of this reaction at 300 nanobarns.", "title": "Applications" }, { "paragraph_id": 53, "text": "Es was used as the calibration marker in the chemical analysis spectrometer (\"alpha-scattering surface analyzer\") of the Surveyor 5 lunar probe. The large mass of this isotope reduced the spectral overlap between signals from the marker and the studied lighter elements of the lunar surface.", "title": "Applications" }, { "paragraph_id": 54, "text": "Most of the available einsteinium toxicity data, is from research on animals. Upon ingestion by rats, only ~0.01% of it ends in the bloodstream. From there, about 65% goes to the bones, where it would remain for ~50 years if not for its radioactive decay, not to speak of the 3-year maximum lifespan of rats, 25% to the lungs (biological half-life ~20 years, though this is again rendered irrelevant by the short half-life of einsteinium), 0.035% to the testicles or 0.01% to the ovaries – where einsteinium stays indefinitely. About 10% of the ingested amount is excreted. The distribution of einsteinium over bone surfaces is uniform and is similar to that of plutonium.", "title": "Safety" } ]
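Note on the reduction reactions mentioned above: the sentences ending "with metallic lithium:", "with lanthanum metal:" and "with hydrogen:" originally introduced equations that are not reproduced in this text. The following balanced equations are a reconstruction from standard actinide chemistry, not a quotation of the source:

$$\mathrm{EsF_3} + 3\,\mathrm{Li} \longrightarrow \mathrm{Es} + 3\,\mathrm{LiF}$$
$$\mathrm{Es_2O_3} + 2\,\mathrm{La} \longrightarrow 2\,\mathrm{Es} + \mathrm{La_2O_3}$$
$$2\,\mathrm{EsX_3} + \mathrm{H_2} \longrightarrow 2\,\mathrm{EsX_2} + 2\,\mathrm{HX} \qquad (\mathrm{X} = \mathrm{Cl},\ \mathrm{Br},\ \mathrm{I})$$

Likewise, the Es(α,n)Md reaction cited under Applications corresponds, taking the target as einsteinium-253 (the isotope named earlier in the article) and the product as mendelevium-256 as in the 1955 experiment, to:

$${}^{253}_{99}\mathrm{Es} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{256}_{101}\mathrm{Md} + {}^{1}_{0}\mathrm{n}$$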
Einsteinium is a synthetic chemical element; it has symbol Es and atomic number 99. Einsteinium is a member of the actinide series and it is the seventh transuranium element. It was named in honor of Albert Einstein. Einsteinium was discovered as a component of the debris of the first hydrogen bomb explosion in 1952. Its most common isotope, einsteinium-253, is produced artificially from decay of californium-253 in a few dedicated high-power nuclear reactors with a total yield on the order of one milligram per year. The reactor synthesis is followed by a complex process of separating einsteinium-253 from other actinides and products of their decay. Other isotopes are synthesized in various laboratories, but in much smaller amounts, by bombarding heavy actinide elements with light ions. Due to the small amounts of produced einsteinium and the short half-life of its most common isotope, there are no practical applications for it except basic scientific research. In particular, einsteinium was used to synthesize, for the first time, 17 atoms of the new element mendelevium in 1955. Einsteinium is a soft, silvery, paramagnetic metal. Its chemistry is typical of the late actinides, with a preponderance of the +3 oxidation state; the +2 oxidation state is also accessible, especially in solids. The high radioactivity of einsteinium-253 produces a visible glow and rapidly damages its crystalline metal lattice, with released heat of about 1000 watts per gram. Difficulty in studying its properties is due to einsteinium-253's decay to berkelium-249 and then californium-249 at a rate of about 3% per day. The isotope of einsteinium with the longest half-life, einsteinium-252 would be more suitable for investigation of physical properties, but it has proven far more difficult to produce and is available only in minute quantities, and not in bulk. Einsteinium is the element with the highest atomic number which has been observed in macroscopic quantities in its pure form as einsteinium-253. Like all synthetic transuranium elements, isotopes of einsteinium are very radioactive and are considered highly dangerous to health on ingestion.
2001-05-17T14:41:03Z
2023-12-11T12:21:21Z
[ "Template:Cite web", "Template:Albert Einstein", "Template:Val", "Template:Nuclide", "Template:E", "Template:ISBN", "Template:Einsteinium compounds", "Template:Good article", "Template:NUBASE2016", "Template:RubberBible86th", "Template:Cite journal", "Template:Cite book", "Template:Citation", "Template:Wiktionary", "Template:Infobox einsteinium", "Template:Main", "Template:Reflist", "Template:Webarchive", "Template:Commons", "Template:Clear", "Template:Periodic table (navbox)", "Template:Authority control", "Template:Overline" ]
https://en.wikipedia.org/wiki/Einsteinium
9,480
Edmund Stoiber
Edmund Rüdiger Stoiber (born 28 September 1941) is a German politician who served as the 16th Minister President of the state of Bavaria between 1993 and 2007 and chairman of the Christian Social Union (CSU) between 1999 and 2007. In 2002, he ran for the office of Chancellor of Germany in the federal election, but in one of the narrowest elections in German history lost against Gerhard Schröder. On 18 January 2007, he announced his decision to step down from the posts of minister-president and party chairman by 30 September, after having been under fire in his own party for weeks. Stoiber was born in Oberaudorf in the district of Rosenheim in Bavaria. Prior to entering politics in 1974 and serving in the Bavarian Parliament, he was a lawyer and worked at the University of Regensburg. Stoiber attended the Ignaz-Günther-Gymnasium in Rosenheim, where he received his Abitur (high school diploma) in 1961, although he had to repeat one year for failing Latin. His military service was with the 1st Gebirgsdivision (mountain infantry division) in Mittenwald and Bad Reichenhall and was cutshort due to a knee injury. Stoiber then studied political science and (from the fall of 1962) law at the Ludwig-Maximilians-Universität München. In 1967, he passed the state law exam and then worked at the University of Regensburg in criminal law and Eastern European law. He received a doctorate in jurisprudence, and then in 1971 passed the second state examination with distinction. In 1971, Stoiber joined the Bavarian State Ministry of Development and Environment. In 1978, Stoiber was elected secretary general of the CSU, a post he held until 1982/83. In this capacity, he served as campaign manager of Franz-Josef Strauss, the first Bavarian leader to run for the chancellorship, in the 1980 national elections. From 1982 to 1986 he served as deputy to the Bavarian secretary of the state and then, in the position of State Minister, led the State Chancellery from 1982 to 1988. From 1988 to 1993 he served as State Minister of the Interior. In May 1993, the Landtag of Bavaria, the state's parliament, elected Stoiber as Minister-President succeeding Max Streibl. He came to power amid a political crisis involving a sex scandal, surrounding a contender for the state premiership. Upon taking office, he nominated Strauss' daughter Monika Hohlmeier as State Minister for Education and Cultural Affairs. In his capacity as Minister-President, Stoiber served as President of the Bundesrat in 1995/96. In 1998, he also succeeded Theo Waigel as chairman of the CSU. During Stoiber's 14 years leading Bavaria, the state solidified its position as one of Germany's richest. Already by 1998, under his leadership, the state had privatized more than $3 billion worth of state-owned businesses and used that money to invest in new infrastructure and provide venture capital for new companies. He was widely regarded a central figure in building one of Europe's most powerful regional economies, attracting thousands of hi-tech, engineering and media companies and reducing unemployment to half the national average. In 2002, Stoiber politically outmaneuvered CDU chairwoman, Angela Merkel, and was declared the CDU/CSU's candidate for the office of chancellor by practically the entire leadership of the CSU's sister party CDU, challenging Gerhard Schröder. At that time, Merkel had generally been seen as a transitional chair and was strongly opposed by the CDU's male leaders, often called the party's "crown princes". 
In the run up to the 2002 national elections, the CSU/CDU held a huge lead in the opinion polls and Stoiber famously remarked that "... this election is like a football match where it's the second half and my team is ahead by 2–0." However, on election day things had changed. The SPD had mounted a huge comeback, and the CDU/CSU was narrowly defeated (though both the SPD and CDU/CSU had 38.5% of the vote, the SPD was ahead by a small 6,000 vote margin, winning 251 seats to the CDU/CSU's 248). The election was one of modern Germany's closest votes. Gerhard Schröder was re-elected as chancellor by the parliament in a coalition with the Greens, who had increased their vote share marginally. Many commentators faulted Stoiber's reaction to the floods in eastern Germany, in the run-up to the election, as a contributory factor in his party's poor electoral result and defeat. In addition, Schröder distinguished himself from his opponent by taking an active stance against the upcoming United States-led Iraq War. His extensive campaigning on this stance was widely seen as swinging the election to the SPD in the weeks running up to the election. Stoiber subsequently led the CSU to an absolute majority in the 2003 Bavarian state elections, for the third time in a row, winning this time 60.7% of the votes and a two-thirds majority in the Landtag. This was the widest margin ever achieved by a German party in any state. Between 2003 and 2004, Stoiber served as co-chair (alongside Franz Müntefering) of the First Commission on the modernization of the federal state (Föderalismuskommission I), which had been established to reform the division of powers between federal and state authorities in Germany. In February 2004, he became a candidate of Jacques Chirac and Gerhard Schröder for the presidency of the European Commission but he decided not to run for this office. Stoiber had ambitions to run again for the chancellorship, but Merkel secured the nomination, and in November 2005 she won the general election. He was slated to join Merkel's first grand coalition cabinet as Economics minister. However, on 1 November 2005, he announced his decision to stay in Bavaria, due to personnel changes on the SPD side of the coalition (Franz Müntefering resigned as SPD chairman) and an unsatisfactory apportionment of competences between himself and designated Science minister Annette Schavan. Stoiber also resigned his seat in the 16th Bundestag, being a member from 18 October to 8 November. Subsequently, criticism grew in the CSU, where other politicians had to scale back their ambitions after Stoiber's decision to stay in Bavaria. On 18 January 2007, he announced his decision to stand down from the posts of minister-president and party chairman by 30 September. Günther Beckstein, then Bavarian state minister of the interior, succeeded him as minister-president and Erwin Huber as party chairman, defeating Horst Seehofer at a convention at 18 September 2007 with 58,1% of the votes. Both Beckstein and Huber resigned after the 2008 state elections, in which the CSU vote dropped to 43,4% and the party had to form a coalition with another party for the first time since 1966. Stoiber was first appointed in 2007 as a special adviser to European Commission President José Manuel Barroso to chair the "High level group on administrative burdens", made up of national experts, NGOs, business and industry organizations. 
Quickly nicknamed the "Stoiber Group", it produced a report in July 2014 with several proposals on streamlining the regulatory process. Stoiber was re-appointed in December 2014 by Jean-Claude Juncker to the same role, from which he resigned after one year in late 2015. Since his retirement from German politics in 2007, Stoiber has worked as a lawyer and held paid and unpaid positions, including: Stoiber was a CSU delegate to the Federal Convention for the purpose of electing the President of Germany in 2017. In his capacity as Minister-President, Stoiber made 58 foreign trips, including to China (1995, 2003), Israel (2001), Egypt (2001), India (2004, 2007) and South Korea (2007). In 2002, Stoiber publicly expressed support for the United States in their policy toward Iraq. During his election campaign, he made clear his opposition to war, and his support for the introduction of weapons inspectors to Iraq without preconditions as a way of avoiding war, and he criticized Schröder for harming the German-American alliance by not calling President George W. Bush and discussing the issue privately. He also attacked German Foreign Minister Joschka Fischer for his criticism of the U.S. position. Stoiber is known for backing Vladimir Putin and there have been comparisons to Gerhard Schröder. One author called Stoiber a "Moscow's Trojan Horse". Putin is known to have given Stoiber "extreme forms of flattery" and privileges such as a private dinner at Putin's residence outside Moscow. Stoiber has been said to be skeptical of Germany's decision to adopt the euro. In 1997, he joined the ministers-president of two other German states, Kurt Biedenkopf and Gerhard Schröder, in making the case for a five-year delay in Europe's currency union. When the European Commission recommended that Greece be allowed to join the eurozone in 1998, he demanded that the country be barred from adopting the common currency for several years instead. He is a staunch opponent of Turkey's integration into the European Union, claiming that its non-Christian culture would dilute the Union. At the same time, Stoiber has repeatedly insisted he is a "good European" who is keen, for instance, on forging an EU-wide foreign policy, replete with a single European army. Earlier, in 1993, he had told German newspapers: "I want a simple confederation. That means the nation-states maintain their dominant role, at least as far as internal matters are concerned." While the conservative wing of the German political spectrum, primarily formed of the CDU and CSU, enjoys considerable support, this support tends to be less extended to Stoiber. He enjoys considerably more support in his home state of Bavaria than in the rest of Germany, where CDU chairwoman Angela Merkel is more popular. This has its reasons: Merkel supports a kind of fiscal conservatism, but a more liberal social policy. Stoiber, on the other hand, favors a more conservative approach to both fiscal and social matters, and while this ensures him the religious vote, strongest in Bavaria, it has weakened his support at the national level. In 2005, Stoiber successfully lobbied Novartis, the Swiss pharmaceuticals group, to move the headquarters of its Sandoz subsidiary to Munich, making it one of Europe's highest-profile corporate relocations that year as well as a significant boost to Stoiber's attempts to build up Bavaria as a pharmaceuticals and biotechnology center. 
During his time as Minister-President of Bavaria, Stoiber pushed for the construction of a roughly 40-kilometer high-speed magnetic-levitation link from Munich's main station to its airport, to be built by Transrapid International, a consortium including ThyssenKrupp and Munich-based Siemens. After he left office, the German federal government abandoned the plans in 2008 because of spiraling costs of as much as €3.4 billion. Stoiber, as a minister in the state of Bavaria, was widely known for advocating a reduction in the number of asylum seekers Germany accepts, something that prompted critics to label him xenophobic, anti-Turkish and anti-Islam. In the late 1990s, he criticized the incoming Chancellor Gerhard Schröder for saying that he would work hard in the interest of Germans and people living in Germany. Stoiber's remarks drew heavy criticism in the press. When Germany's Federal Constitutional Court decided in 1995 that a Bavarian law requiring a crucifix to be hung in each of the state's 40,000 classrooms was unconstitutional, Stoiber said he would not order the removal of crucifixes "for the time being", and asserted that he was under no obligation to remove them in schools where parents unanimously opposed such action. During his 2002 election campaign, Stoiber indicated he would not ban same-sex marriages—sanctioned by the Schröder government—a policy he had vehemently objected to when it was introduced. Stoiber has been a staunch advocate of changes in German law that would give more power to owners of private TV channels. In 1995, he publicly called for the abolition of Germany's public television service ARD and a streamlining of its regional services, adding that he and Minister-President Kurt Biedenkopf of Saxony would break the contract ARD has with regional governments if reforms were not undertaken. However, when European Commissioner for Competition Karel van Miert unveiled ideas for reforming the rules governing the financing of public service broadcasters in 1998, Stoiber led the way in rejecting moves to reform established practice. During the run-up to the German general election in 2005, which was held ahead of schedule, Stoiber created controversy through a campaign speech held in the beginning of August 2005 in the federal state of Baden-Württemberg. He said, "I do not accept that the East [of Germany] will again decide who will be Germany's chancellor. It cannot be allowed that the frustrated determine Germany's fate." People in the new federal states of Germany (the former German Democratic Republic) were offended by Stoiber's remarks. While the CSU attempted to portray them as "misinterpreted", Stoiber created further controversy when he claimed that "if it was like Bavaria everywhere, there wouldn't be any problems. But unfortunately, ladies and gentlemen, we have everywhere some sections of the populace not as intelligent as in Bavaria." The tone of the comments was exacerbated by a perception by some within Germany of the state of Bavaria as "arrogant". Many, including members of the CDU, attribute Stoiber's comments and behavior as a contributing factor to the CDU's losses in the 2005 general election. He was accused by many in the CDU/CSU of offering "half-hearted" support to Angela Merkel, with some even accusing him of being reluctant to support a female candidate from the East. (This also contrasted unfavorably with Merkel's robust support for his candidacy in the 2002 election.) 
He has insinuated that votes were lost because of the choice of a female candidate. He came under heavy fire for these comments from press and politicians alike, especially since he himself lost almost 10% of the Bavarian vote—a dubious feat in itself as Bavarians tend to consistently vote conservatively. Nonetheless, a poll has suggested over 9% may have voted differently if the conservative candidate was a man from the West, although this does not clearly show if such a candidate would have gained or lost votes for the conservatives. When the Croatian National Bank turned down BayernLB's original bid to take over the local arm of Hypo Alpe-Adria-Bank International, this drew strong criticism from Stoiber, who said the decision was "unacceptable" and a "severe strain" for Bavaria's relations with Croatia. Croatia was seeking to join the European Union at the time. The central bank's board later reviewed and accepted BayernLB's offer of 1.6 billion euros. The investment in Hypo Group Alpe Adria was part of a series of ill-fated investments, which later forced BayernLB to take a 10 billion-euro bailout in the financial crisis. In September 2015, Emily O'Reilly, the European Ombudsman, received a complaint from two NGOs, Corporate Europe Observatory and Friends of the Earth, according to which Stoiber's appointment as special adviser on the commission's better regulation agenda broke internal rules on appointments. Stoiber is Roman Catholic. He is married to Karin Stoiber. They have three children: Constanze (born 1971, married Hausmann), Veronica (born 1977, married Saß), Dominic (born 1980) and five grandchildren: Johannes (1999), Benedikt (2001), Theresa Marie (2005), Ferdinand (2009) and another grandson (2011). Stoiber is a keen football fan and operative. In his youth, he played for local football side BCF Wolfratshausen. Stoiber serves as Member of the Supervisory Board of FC Bayern München AG (the stock corporation that runs the professional football section) and Chairman of the Administrative Advisory Board of FC Bayern Munich e.V. (the club that owns the majority of the club corporation). Before the 2002 election, FC Bayern general manager Uli Hoeneß expressed his support for Stoiber and the CSU. Football legend, former FC Bayern president and DFB vice president Franz Beckenbauer showed his support for Stoiber by letting him join the Germany national football team on their flight home from Japan after the 2002 FIFA World Cup.
[ { "paragraph_id": 0, "text": "Edmund Rüdiger Stoiber (born 28 September 1941) is a German politician who served as the 16th Minister President of the state of Bavaria between 1993 and 2007 and chairman of the Christian Social Union (CSU) between 1999 and 2007. In 2002, he ran for the office of Chancellor of Germany in the federal election, but in one of the narrowest elections in German history lost against Gerhard Schröder. On 18 January 2007, he announced his decision to step down from the posts of minister-president and party chairman by 30 September, after having been under fire in his own party for weeks.", "title": "" }, { "paragraph_id": 1, "text": "Stoiber was born in Oberaudorf in the district of Rosenheim in Bavaria. Prior to entering politics in 1974 and serving in the Bavarian Parliament, he was a lawyer and worked at the University of Regensburg.", "title": "Early life" }, { "paragraph_id": 2, "text": "Stoiber attended the Ignaz-Günther-Gymnasium in Rosenheim, where he received his Abitur (high school diploma) in 1961, although he had to repeat one year for failing Latin. His military service was with the 1st Gebirgsdivision (mountain infantry division) in Mittenwald and Bad Reichenhall and was cutshort due to a knee injury. Stoiber then studied political science and (from the fall of 1962) law at the Ludwig-Maximilians-Universität München. In 1967, he passed the state law exam and then worked at the University of Regensburg in criminal law and Eastern European law. He received a doctorate in jurisprudence, and then in 1971 passed the second state examination with distinction.", "title": "Education and profession" }, { "paragraph_id": 3, "text": "In 1971, Stoiber joined the Bavarian State Ministry of Development and Environment.", "title": "Education and profession" }, { "paragraph_id": 4, "text": "In 1978, Stoiber was elected secretary general of the CSU, a post he held until 1982/83. In this capacity, he served as campaign manager of Franz-Josef Strauss, the first Bavarian leader to run for the chancellorship, in the 1980 national elections. From 1982 to 1986 he served as deputy to the Bavarian secretary of the state and then, in the position of State Minister, led the State Chancellery from 1982 to 1988. From 1988 to 1993 he served as State Minister of the Interior.", "title": "Political career" }, { "paragraph_id": 5, "text": "In May 1993, the Landtag of Bavaria, the state's parliament, elected Stoiber as Minister-President succeeding Max Streibl. He came to power amid a political crisis involving a sex scandal, surrounding a contender for the state premiership. Upon taking office, he nominated Strauss' daughter Monika Hohlmeier as State Minister for Education and Cultural Affairs.", "title": "Political career" }, { "paragraph_id": 6, "text": "In his capacity as Minister-President, Stoiber served as President of the Bundesrat in 1995/96. In 1998, he also succeeded Theo Waigel as chairman of the CSU.", "title": "Political career" }, { "paragraph_id": 7, "text": "During Stoiber's 14 years leading Bavaria, the state solidified its position as one of Germany's richest. Already by 1998, under his leadership, the state had privatized more than $3 billion worth of state-owned businesses and used that money to invest in new infrastructure and provide venture capital for new companies. 
He was widely regarded a central figure in building one of Europe's most powerful regional economies, attracting thousands of hi-tech, engineering and media companies and reducing unemployment to half the national average.", "title": "Political career" }, { "paragraph_id": 8, "text": "In 2002, Stoiber politically outmaneuvered CDU chairwoman, Angela Merkel, and was declared the CDU/CSU's candidate for the office of chancellor by practically the entire leadership of the CSU's sister party CDU, challenging Gerhard Schröder. At that time, Merkel had generally been seen as a transitional chair and was strongly opposed by the CDU's male leaders, often called the party's \"crown princes\".", "title": "Political career" }, { "paragraph_id": 9, "text": "In the run up to the 2002 national elections, the CSU/CDU held a huge lead in the opinion polls and Stoiber famously remarked that \"... this election is like a football match where it's the second half and my team is ahead by 2–0.\" However, on election day things had changed. The SPD had mounted a huge comeback, and the CDU/CSU was narrowly defeated (though both the SPD and CDU/CSU had 38.5% of the vote, the SPD was ahead by a small 6,000 vote margin, winning 251 seats to the CDU/CSU's 248). The election was one of modern Germany's closest votes.", "title": "Political career" }, { "paragraph_id": 10, "text": "Gerhard Schröder was re-elected as chancellor by the parliament in a coalition with the Greens, who had increased their vote share marginally. Many commentators faulted Stoiber's reaction to the floods in eastern Germany, in the run-up to the election, as a contributory factor in his party's poor electoral result and defeat. In addition, Schröder distinguished himself from his opponent by taking an active stance against the upcoming United States-led Iraq War. His extensive campaigning on this stance was widely seen as swinging the election to the SPD in the weeks running up to the election.", "title": "Political career" }, { "paragraph_id": 11, "text": "Stoiber subsequently led the CSU to an absolute majority in the 2003 Bavarian state elections, for the third time in a row, winning this time 60.7% of the votes and a two-thirds majority in the Landtag. This was the widest margin ever achieved by a German party in any state.", "title": "Political career" }, { "paragraph_id": 12, "text": "Between 2003 and 2004, Stoiber served as co-chair (alongside Franz Müntefering) of the First Commission on the modernization of the federal state (Föderalismuskommission I), which had been established to reform the division of powers between federal and state authorities in Germany. In February 2004, he became a candidate of Jacques Chirac and Gerhard Schröder for the presidency of the European Commission but he decided not to run for this office.", "title": "Political career" }, { "paragraph_id": 13, "text": "Stoiber had ambitions to run again for the chancellorship, but Merkel secured the nomination, and in November 2005 she won the general election. He was slated to join Merkel's first grand coalition cabinet as Economics minister. However, on 1 November 2005, he announced his decision to stay in Bavaria, due to personnel changes on the SPD side of the coalition (Franz Müntefering resigned as SPD chairman) and an unsatisfactory apportionment of competences between himself and designated Science minister Annette Schavan. 
Stoiber also resigned his seat in the 16th Bundestag, being a member from 18 October to 8 November.", "title": "Political career" }, { "paragraph_id": 14, "text": "Subsequently, criticism grew in the CSU, where other politicians had to scale back their ambitions after Stoiber's decision to stay in Bavaria. On 18 January 2007, he announced his decision to stand down from the posts of minister-president and party chairman by 30 September. Günther Beckstein, then Bavarian state minister of the interior, succeeded him as minister-president and Erwin Huber as party chairman, defeating Horst Seehofer at a convention at 18 September 2007 with 58,1% of the votes. Both Beckstein and Huber resigned after the 2008 state elections, in which the CSU vote dropped to 43,4% and the party had to form a coalition with another party for the first time since 1966.", "title": "Political career" }, { "paragraph_id": 15, "text": "Stoiber was first appointed in 2007 as a special adviser to European Commission President José Manuel Barroso to chair the \"High level group on administrative burdens\", made up of national experts, NGOs, business and industry organizations. Quickly nicknamed the \"Stoiber Group\", it produced a report in July 2014 with several proposals on streamlining the regulatory process. Stoiber was re-appointed in December 2014 by Jean-Claude Juncker to the same role, from which he resigned after one year in late 2015.", "title": "Life after politics" }, { "paragraph_id": 16, "text": "Since his retirement from German politics in 2007, Stoiber has worked as a lawyer and held paid and unpaid positions, including:", "title": "Life after politics" }, { "paragraph_id": 17, "text": "Stoiber was a CSU delegate to the Federal Convention for the purpose of electing the President of Germany in 2017.", "title": "Life after politics" }, { "paragraph_id": 18, "text": "In his capacity as Minister-President, Stoiber made 58 foreign trips, including to China (1995, 2003), Israel (2001), Egypt (2001), India (2004, 2007) and South Korea (2007).", "title": "Political positions" }, { "paragraph_id": 19, "text": "In 2002, Stoiber publicly expressed support for the United States in their policy toward Iraq. During his election campaign, he made clear his opposition to war, and his support for the introduction of weapons inspectors to Iraq without preconditions as a way of avoiding war, and he criticized Schröder for harming the German-American alliance by not calling President George W. Bush and discussing the issue privately. He also attacked German Foreign Minister Joschka Fischer for his criticism of the U.S. position.", "title": "Political positions" }, { "paragraph_id": 20, "text": "Stoiber is known for backing Vladimir Putin and there have been comparisons to Gerhard Schröder. One author called Stoiber a \"Moscow's Trojan Horse\". Putin is known to have given Stoiber \"extreme forms of flattery\" and privileges such as a private dinner at Putin's residence outside Moscow.", "title": "Political positions" }, { "paragraph_id": 21, "text": "Stoiber has been said to be skeptical of Germany's decision to adopt the euro. In 1997, he joined the ministers-president of two other German states, Kurt Biedenkopf and Gerhard Schröder, in making the case for a five-year delay in Europe's currency union. When the European Commission recommended that Greece be allowed to join the eurozone in 1998, he demanded that the country be barred from adopting the common currency for several years instead. 
He is a staunch opponent of Turkey's integration into the European Union, claiming that its non-Christian culture would dilute the Union.", "title": "Political positions" }, { "paragraph_id": 22, "text": "At the same time, Stoiber has repeatedly insisted he is a \"good European\" who is keen, for instance, on forging an EU-wide foreign policy, replete with a single European army. Earlier, in 1993, he had told German newspapers: \"I want a simple confederation. That means the nation-states maintain their dominant role, at least as far as internal matters are concerned.\"", "title": "Political positions" }, { "paragraph_id": 23, "text": "While the conservative wing of the German political spectrum, primarily formed of the CDU and CSU, enjoys considerable support, this support tends to be less extended to Stoiber. He enjoys considerably more support in his home state of Bavaria than in the rest of Germany, where CDU chairwoman Angela Merkel is more popular. This has its reasons: Merkel supports a kind of fiscal conservatism, but a more liberal social policy. Stoiber, on the other hand, favors a more conservative approach to both fiscal and social matters, and while this ensures him the religious vote, strongest in Bavaria, it has weakened his support at the national level.", "title": "Political positions" }, { "paragraph_id": 24, "text": "In 2005, Stoiber successfully lobbied Novartis, the Swiss pharmaceuticals group, to move the headquarters of its Sandoz subsidiary to Munich, making it one of Europe's highest-profile corporate relocations that year as well as a significant boost to Stoiber's attempts to build up Bavaria as a pharmaceuticals and biotechnology center.", "title": "Political positions" }, { "paragraph_id": 25, "text": "During his time as Minister-President of Bavaria, Stoiber pushed for the construction of a roughly 40-kilometer high-speed magnetic-levitation link from Munich's main station to its airport, to be built by Transrapid International, a consortium including ThyssenKrupp and Munich-based Siemens. After he left office, the German federal government abandoned the plans in 2008 because of spiraling costs of as much as €3.4 billion.", "title": "Political positions" }, { "paragraph_id": 26, "text": "Stoiber, as a minister in the state of Bavaria, was widely known for advocating a reduction in the number of asylum seekers Germany accepts, something that prompted critics to label him xenophobic, anti-Turkish and anti-Islam. In the late 1990s, he criticized the incoming Chancellor Gerhard Schröder for saying that he would work hard in the interest of Germans and people living in Germany. 
Stoiber's remarks drew heavy criticism in the press.", "title": "Political positions" }, { "paragraph_id": 27, "text": "When Germany's Federal Constitutional Court decided in 1995 that a Bavarian law requiring a crucifix to be hung in each of the state's 40,000 classrooms was unconstitutional, Stoiber said he would not order the removal of crucifixes \"for the time being\", and asserted that he was under no obligation to remove them in schools where parents unanimously opposed such action.", "title": "Political positions" }, { "paragraph_id": 28, "text": "During his 2002 election campaign, Stoiber indicated he would not ban same-sex marriages—sanctioned by the Schröder government—a policy he had vehemently objected to when it was introduced.", "title": "Political positions" }, { "paragraph_id": 29, "text": "Stoiber has been a staunch advocate of changes in German law that would give more power to owners of private TV channels. In 1995, he publicly called for the abolition of Germany's public television service ARD and a streamlining of its regional services, adding that he and Minister-President Kurt Biedenkopf of Saxony would break the contract ARD has with regional governments if reforms were not undertaken. However, when European Commissioner for Competition Karel van Miert unveiled ideas for reforming the rules governing the financing of public service broadcasters in 1998, Stoiber led the way in rejecting moves to reform established practice.", "title": "Political positions" }, { "paragraph_id": 30, "text": "During the run-up to the German general election in 2005, which was held ahead of schedule, Stoiber created controversy through a campaign speech held in the beginning of August 2005 in the federal state of Baden-Württemberg. He said, \"I do not accept that the East [of Germany] will again decide who will be Germany's chancellor. It cannot be allowed that the frustrated determine Germany's fate.\" People in the new federal states of Germany (the former German Democratic Republic) were offended by Stoiber's remarks. While the CSU attempted to portray them as \"misinterpreted\", Stoiber created further controversy when he claimed that \"if it was like Bavaria everywhere, there wouldn't be any problems. But unfortunately, ladies and gentlemen, we have everywhere some sections of the populace not as intelligent as in Bavaria.\" The tone of the comments was exacerbated by a perception by some within Germany of the state of Bavaria as \"arrogant\".", "title": "Controversies" }, { "paragraph_id": 31, "text": "Many, including members of the CDU, attribute Stoiber's comments and behavior as a contributing factor to the CDU's losses in the 2005 general election. He was accused by many in the CDU/CSU of offering \"half-hearted\" support to Angela Merkel, with some even accusing him of being reluctant to support a female candidate from the East. (This also contrasted unfavorably with Merkel's robust support for his candidacy in the 2002 election.) He has insinuated that votes were lost because of the choice of a female candidate. He came under heavy fire for these comments from press and politicians alike, especially since he himself lost almost 10% of the Bavarian vote—a dubious feat in itself as Bavarians tend to consistently vote conservatively. 
Nonetheless, a poll has suggested over 9% may have voted differently if the conservative candidate was a man from the West, although this does not clearly show if such a candidate would have gained or lost votes for the conservatives.", "title": "Controversies" }, { "paragraph_id": 32, "text": "When the Croatian National Bank turned down BayernLB's original bid to take over the local arm of Hypo Alpe-Adria-Bank International, this drew strong criticism from Stoiber, who said the decision was \"unacceptable\" and a \"severe strain\" for Bavaria's relations with Croatia. Croatia was seeking to join the European Union at the time. The central bank's board later reviewed and accepted BayernLB's offer of 1.6 billion euros. The investment in Hypo Group Alpe Adria was part of a series of ill-fated investments, which later forced BayernLB to take a 10 billion-euro bailout in the financial crisis.", "title": "Controversies" }, { "paragraph_id": 33, "text": "In September 2015, Emily O'Reilly, the European Ombudsman, received a complaint from two NGOs, Corporate Europe Observatory and Friends of the Earth, according to which Stoiber's appointment as special adviser on the commission's better regulation agenda broke internal rules on appointments.", "title": "Controversies" }, { "paragraph_id": 34, "text": "Stoiber is Roman Catholic. He is married to Karin Stoiber. They have three children: Constanze (born 1971, married Hausmann), Veronica (born 1977, married Saß), Dominic (born 1980) and five grandchildren: Johannes (1999), Benedikt (2001), Theresa Marie (2005), Ferdinand (2009) and another grandson (2011).", "title": "Personal life" }, { "paragraph_id": 35, "text": "Stoiber is a keen football fan and operative. In his youth, he played for local football side BCF Wolfratshausen. Stoiber serves as Member of the Supervisory Board of FC Bayern München AG (the stock corporation that runs the professional football section) and Chairman of the Administrative Advisory Board of FC Bayern Munich e.V. (the club that owns the majority of the club corporation).", "title": "Personal life" }, { "paragraph_id": 36, "text": "Before the 2002 election, FC Bayern general manager Uli Hoeneß expressed his support for Stoiber and the CSU. Football legend, former FC Bayern president and DFB vice president Franz Beckenbauer showed his support for Stoiber by letting him join the Germany national football team on their flight home from Japan after the 2002 FIFA World Cup.", "title": "Personal life" } ]
Edmund Rüdiger Stoiber is a German politician who served as the 16th Minister President of the state of Bavaria between 1993 and 2007 and chairman of the Christian Social Union (CSU) between 1999 and 2007. In 2002, he ran for the office of Chancellor of Germany in the federal election, but in one of the narrowest elections in German history lost against Gerhard Schröder. On 18 January 2007, he announced his decision to step down from the posts of minister-president and party chairman by 30 September, after having been under fire in his own party for weeks.
2001-07-24T10:54:28Z
2023-12-31T08:13:25Z
[ "Template:S-ppo", "Template:Navboxes", "Template:ISBN", "Template:Cite news", "Template:Commons and category", "Template:Wikiquote", "Template:In lang", "Template:S-off", "Template:S-bef", "Template:Authority control", "Template:BLP sources", "Template:Use dmy dates", "Template:S-end", "Template:Short description", "Template:Lang-de", "Template:Reflist", "Template:Cite web", "Template:Webarchive", "Template:Official website", "Template:C-SPAN", "Template:S-start", "Template:Infobox officeholder", "Template:Nbsp", "Template:FC Bayern Munich board", "Template:S-ttl", "Template:S-aft" ]
https://en.wikipedia.org/wiki/Edmund_Stoiber
9,481
Erfurt
Erfurt (German pronunciation: [ˈɛʁfʊʁt]) is the capital and largest city of the Central German state of Thuringia. It is in the wide valley of the River Gera, in the southern part of the Thuringian Basin, north of the Thuringian Forest, and in the middle of a line of the six largest Thuringian cities (Thüringer Städtekette), stretching from Eisenach in the west, via Gotha, Erfurt, Weimar and Jena, to Gera in the east, close to the geographic centre of Germany. Erfurt is 100 km (62 mi) south-west of Leipzig, 250 km (155 mi) north-east of Frankfurt, 300 km (186 mi) south-west of Berlin and 400 km (249 mi) north of Munich. Erfurt's old town is one of the best preserved medieval city centres in Germany. Tourist attractions include the Merchants' Bridge (Krämerbrücke), the Old Synagogue (Alte Synagoge), the oldest in Europe and a UNESCO World Heritage Site, Cathedral Hill (Domberg) with the ensemble of Erfurt Cathedral (Erfurter Dom) and St Severus' Church (Severikirche) and Petersberg Citadel (Zitadelle Petersberg), one of the largest and best preserved town fortresses in Central Europe. The city's economy is based on agriculture, horticulture and microelectronics. Its central location has made it a logistics hub for Germany and central Europe. Erfurt hosts the second-largest trade fair in eastern Germany (after Leipzig), as well as the public television children's channel KiKa. The city is on the Via Regia, a medieval trade and pilgrims' road network. Modern Erfurt is also a hub for ICE high-speed trains and other German and European transport networks. Erfurt was first mentioned in 742, when Saint Boniface founded the diocese. Although the town did not belong to any of the Thuringian states politically, it quickly became the economic centre of the region and was a member of the Hanseatic League. It was part of the Electorate of Mainz during the Holy Roman Empire, and became part of the Kingdom of Prussia in 1802. From 1949 until 1990 Erfurt was part of the German Democratic Republic (East Germany). The University of Erfurt was founded in 1379, making it the first university to be established within the geographic area which constitutes modern Germany. It closed in 1816 and was re-established in 1994. Martin Luther (1483–1546) was its most famous student, studying there from 1501 before entering St Augustine's Monastery in 1505. Other noted Erfurters include the medieval philosopher and mystic Meister Eckhart (c. 1260–1328), the Baroque composer Johann Pachelbel (1653–1706) and the sociologist Max Weber (1864–1920). Erfurt is an old Germanic settlement. The earliest evidence of human settlement dates from the prehistoric era; archaeological finds from the north of Erfurt revealed human traces from the paleolithic period, ca. 10,000 BCE. To the west of Erfurt, in Frienstedt, a large Germanic village of the early centuries AD was discovered during the construction of a highway. There, the oldest Germanic word yet found in Central Germany, "kaba", was discovered written in runic script on a comb from a sacrificial shaft. Finds from Roman times include 200 coins dating back to the third century, 150 Roman ceramic fragments, more than 200 fibulae, and 11 inhumation graves of the Haßleben-Leuna group, an archaeological culture. The Melchendorf dig in the southern city part showed a settlement from the neolithic period. The Thuringii inhabited the Erfurt area in c. 480 and gave their name to Thuringia in c. 500.
The town is first mentioned in 742 under the name of "Erphesfurt": in that year, Saint Boniface wrote to Pope Zachary to inform him that he had established three dioceses in central Germany, one of them "in a place called Erphesfurt, which for a long time has been inhabited by pagan natives." All three dioceses (the other two were Würzburg and Büraburg) were confirmed by Zachary the next year, though in 755 Erfurt was brought into the diocese of Mainz. That the place was populous already is borne out by archeological evidence, which includes 23 graves and six horse burials from the sixth and seventh centuries. Throughout the Middle Ages, Erfurt was an important trading town because of its location, near a ford across the Gera river. Together with the other five Thuringian woad towns of Gotha, Tennstedt, Arnstadt and Langensalza it was the centre of the German woad trade, which made those cities very wealthy. Erfurt was the junction of important trade routes: the Via Regia was one of the most used east–west roads between France and Russia (via Frankfurt, Erfurt, Leipzig and Wrocław) and another route in the north–south direction was the connection between the Baltic Sea ports (e. g. Lübeck) and the potent upper Italian city-states like Venice and Milan. During the tenth and eleventh centuries both the Emperor and the Electorate of Mainz held some privileges in Erfurt. The German kings had an important monastery on Petersberg hill and the Archbishops of Mainz collected taxes from the people. Around 1100, some people became free citizens by paying the annual "Freizins" (liberation tax), which marks a first step in becoming an independent city. During the 12th century, as a sign of more and more independence, the citizens built a city wall around Erfurt (in the area of today's Juri-Gagarin-Ring). After 1200, independence was fulfilled and a city council was founded in 1217; the town hall was built in 1275. In the following decades, the council bought a city-owned territory around Erfurt which consisted at its height of nearly 100 villages and castles and even another small town (Sömmerda). Erfurt became an important regional power between the Landgraviate of Thuringia around, the Electorate of Mainz to the west and the Electorate of Saxony to the east. Between 1306 and 1481, Erfurt was allied with the two other major Thuringian cities (Mühlhausen and Nordhausen) in the Thuringian City Alliance and the three cities joined the Hanseatic League together in 1430. A peak in economic development was reached in the 15th century, when the city had a population of 20,000 making it one of the largest in Germany. Between 1432 and 1446, a second and higher city wall was established. In 1483, a first city fortress was built on Cyriaksburg hill in the southwestern part of the town. In the year 1184, Erfurt was the location of a notable accident called the Erfurter Latrinensturz ('Erfurt latrine fall'). King Henry VI held council in a building of the Erfurt Cathedral to negotiate peace between two of his vassals, Archbishop Konrad I of Mainz and Landgrave Ludwig III of Thuringia. The amassed weight of all the gathered men proved too heavy for the floor to bear, which collapsed. According to contemporary accounts, dozens of people fell to their death into the latrine pit below. Ludwig III, Konrad I and Henry VI survived the affair. The Jewish community of Erfurt was founded in the 11th century and became, together with Mainz, Worms and Speyer, one of the most influential in Germany. 
The Old Synagogue is still extant and is a museum today, as is the mikveh at Gera river near Krämerbrücke. In 1349, during the wave of Black Death Jewish persecutions across Europe, the Jews of Erfurt were rounded up, with more than 100 killed and the rest driven from the city. Before the persecution, a wealthy Jewish merchant buried his property in the basement of his house. In 1998, this treasure was found during construction works. The Erfurt Treasure with various gold and silver objects is shown in the exhibition in the synagogue today. Only a few years after 1349, the Jews moved back to Erfurt and founded a second community, which was disbanded by the city council in 1458. Because of their exceptional testimony to the life of a medieval Jewish community, the Jewish sites in Erfurt were inscribed on the UNESCO World Heritage List in 2023. In 1379, the University of Erfurt was founded. Together with the University of Cologne it was one of the first city-owned universities in Germany, while they were usually owned by the Landesherren. Some buildings of this old university are extant or restored in the "Latin Quarter" in the northern city centre (like Collegium Maius, student dorms "Georgenburse" and others, the hospital and the church of the university). The university quickly became a hotspot of German cultural life in Renaissance humanism with scholars like Ulrich von Hutten, Helius Eobanus Hessus and Justus Jonas. In 1501 Martin Luther (1483–1546) moved to Erfurt and began his studies at the university. After 1505, he lived at St. Augustine's Monastery as a friar. In 1507 he was ordained as a priest in Erfurt Cathedral. He moved permanently to Wittenberg in 1511. Erfurt was an early adopter of the Protestant Reformation, in 1521. In 1530, the city became one of the first in Europe to be officially bi-confessional with the Hammelburg Treaty. It kept that status through all the following centuries. The later 16th and the 17th century brought a slow economic decline of Erfurt. Trade shrank, the population was falling and the university lost its influence. The city's independence was endangered. In 1664, the city and surrounding area were brought under the dominion of the Electorate of Mainz and the city lost its independence. The Electorate built a huge fortress on Petersberg hill between 1665 and 1726 to control the city and instituted a governor to rule Erfurt. In 1682 and 1683 Erfurt experienced the worst plague years in its history. In 1683 more than half of the population died because of the deadly disease. In Erfurt witch-hunts are known from 1526 to 1705. Trial records are only incomplete. Twenty people were involved in witch trials and at least eight people died. During the late 18th century, Erfurt saw another cultural peak. Governor Karl Theodor Anton Maria von Dalberg had close relations with Johann Wolfgang von Goethe, Friedrich Schiller, Johann Gottfried Herder, Christoph Martin Wieland and Wilhelm von Humboldt, who often visited him at his court in Erfurt. Erfurt became part of the Kingdom of Prussia in 1802, to compensate for territories Prussia lost to France on the Left Bank of the Rhine. In the Capitulation of Erfurt, the city, its 12,000 Prussian and Saxon defenders under William VI, Prince of Orange-Nassau, 65 artillery pieces, and the Petersberg Citadel and Cyriaksburg Citadel Cyriaksburg, were handed over to the French on 16 October 1806. At the time of the capitulation, Joachim Murat, Marshal of France, had about 16,000 troops near Erfurt. 
With the attachment of the Saxe-Weimar territory of Blankenhain, the city became part of the First French Empire in 1806 as the Principality of Erfurt, directly subordinate to Napoleon as an "imperial state domain" (French: domaine réservé à l'empereur), separate from the Confederation of the Rhine, which the surrounding Thuringian states had joined. Erfurt was administered by a civilian and military Senate (Finanz- und Domänenkammer Erfurt) under a French governor, based in the Kurmainzische Statthalterei, previously the seat of the city's governor under the Electorate. Napoleon first visited the principality on 23 July 1807, inspecting the citadels and fortifications. In 1808, the Congress of Erfurt was held with Napoleon and Alexander I of Russia visiting the city. During their administration, the French introduced street lighting and a tax on foreign horses to pay for maintaining the road surface. The Peterskirche suffered under the French occupation, with its inventory being auctioned off to other local churches – including the organ, bells and even the tower of the Corpus Christi chapel (Fronleichnamskapelle) – and the former monastery's library being donated to the University of Erfurt (and then to the Boineburg Library when the university closed in 1816). Similarly the Cyriaksburg Citadel was damaged by the French, with the city-side walls being partially dismantled in the hunt for imagined treasures from the convent, workers being paid from the sale of the building materials. In 1811, to commemorate the birth of the Prince Imperial, a 70-foot (21-metre) ceremonial column (Die Napoleonsäule) of wood and plaster was erected on the common. Similarly, the Napoleonshöhe – a Greek-style temple topped by a winged victory with shield, sword and lance and containing a bust of Napoleon sculpted by Friedrich Döll – was erected in the Stiegerwald woods, including a grotto with fountain and flower beds, using a large pond (lavoratorium) from the Peterskirche, inaugurated with ceremony on 14 August 1811 after extravagant celebrations for Napoleon's birthday, which were repeated in 1812 with a concert in the Predigerkirche conducted by Louis Spohr. With the Sixth Coalition forming after French defeat in Russia, on 24 February 1813 Napoleon ordered the Petersburg Citadel to prepare for siege, visiting the city on 25 April to inspect the fortifications, in particular both Citadels. On 10 July 1813, Napoleon put Alexandre d'Alton [fr], baron of the Empire, in charge of the defences of Erfurt. However, when the French decreed that 1000 men would be conscripted into the Grande Armée, the recruits were joined by other citizens in rioting on 19 July that led to 20 arrests, of whom 2 were sentenced to death by French court-martial; as a result, the French ordered the closure of all inns and alehouses. Within a week of the Sixth Coalition's decisive victory at Leipzig (16–19 October 1813), however, Erfurt was besieged by Prussian, Austrian and Russian troops under the command of Prussian Lt Gen von Kleist. After a first capitulation signed by d'Alton on 20 December 1813 the French troops withdrew to the two fortresses of Petersberg and Cyriaksburg, allowing for the Coalition forces to march into Erfurt on 6 January 1814 to jubilant greetings; the Napoleonsäule ceremonial column was burned and destroyed as a symbol of the citizens' oppression under the French; similarly the Napoleonshöhe was burned on 1 November 1813 and completely destroyed by Erfurters and their besiegers in 1814. 
After a call for volunteers 3 days later, 300 Erfurters joined the Coalition armies in France. Finally, in May 1814, the French capitulated fully, with 1,700 French troops vacating the Petersberg and Cyriaksburg fortresses. During the two and a half months of siege, the mortality rate rose in the city greatly; 1,564 Erfurt citizens died in 1813, around a thousand more than the previous year. After the Congress of Vienna, Erfurt was restored to Prussia on 21 June 1815, becoming the capital of one of the three districts (Regierungsbezirke) of the new Province of Saxony, but some southern and eastern parts of Erfurter lands joined Blankenhain in being transferred to the Grand Duchy of Saxe-Weimar-Eisenach the following September. Although enclosed by Thuringian territory in the west, south and east, the city remained part of the Prussian Province of Saxony until 1944. After the 1848 Revolution, many Germans desired to have a united national state. An attempt in this direction was the failed Erfurt Union of German states in 1850. The Industrial Revolution reached Erfurt in the 1840s, when the Thuringian Railway connecting Berlin and Frankfurt was built. During the following years, many factories in different sectors were founded. One of the biggest was the "Royal Gun Factory of Prussia" in 1862. After the Unification of Germany in 1871, Erfurt moved from the southern border of Prussia to the centre of Germany, so the fortifications of the city were no longer needed. The demolition of the city fortifications in 1873 led to a construction boom in Erfurt, because it was now possible to build in the area formerly occupied by the city walls and beyond. Many public and private buildings emerged and the infrastructure (such as a tramway, hospitals, and schools) improved rapidly. The number of inhabitants grew from 40,000 around 1870 to 130,000 in 1914 and the city expanded in all directions. The "Erfurt Program" was adopted by the Social Democratic Party of Germany during its congress at Erfurt in 1891. Between the wars, the city kept growing. Housing shortages were fought with building programmes and social infrastructure was broadened according to the welfare policy in the Weimar Republic. The Great Depression between 1929 and 1932 led to a disaster for Erfurt, nearly one out of three became unemployed. Conflicts between far-left and far-right-oriented milieus increased and many inhabitants supported the new Nazi government and Adolf Hitler. Others, especially some communist workers, put up resistance against the new administration. In 1938, the new synagogue was destroyed during the Kristallnacht. Jews lost their property and emigrated or were deported to Nazi concentration camps (together with many communists). In 1914, the company Topf and Sons began the manufacture of crematoria later becoming the market leader in this industry. Under the Nazis, JA Topf & Sons supplied specially developed crematoria, ovens and associated plants to the Auschwitz-Birkenau, Buchenwald and Mauthausen-Gusen concentration camps. On 27 January 2011 a memorial and museum dedicated to the Holocaust victims was opened at the former company premises in Erfurt. During World War II, Erfurt experienced more than 27 British and American air raids, about 1600 civilians died. Bombed as a target of the Oil Campaign of World War II, Erfurt suffered only limited damage and was captured on 12 April 1945, by the US 80th Infantry Division. 
On 3 July, American troops left the city, which then became part of the Soviet Zone of Occupation and eventually of the German Democratic Republic (East Germany). In 1948, Erfurt became the capital of Thuringia, replacing Weimar. In 1952, the Länder in the GDR were dissolved in favour of centralization under the new socialist government, and Erfurt then became the capital of a new "Bezirk" (district). In 1953, the Hochschule of education was founded, followed by the Hochschule of medicine in 1954, the first academic institutions in Erfurt since the closing of the university in 1816.
On 19 March 1970, the East and West German heads of government, Willi Stoph and Willy Brandt, met in Erfurt, the first such meeting since the division of Germany. During the 1970s and 1980s, as the economic situation in the GDR worsened, many old buildings in the city centre decayed, while the government fought the housing shortage by building large Plattenbau settlements on the periphery. The Peaceful Revolution of 1989/1990 led to German reunification.
With the re-formation of the state of Thuringia in 1990, the city became the state capital. After reunification, a deep economic crisis occurred in Eastern Germany. Many factories closed, and many people lost their jobs and moved to the former West Germany. At the same time, many buildings were redeveloped and the infrastructure improved massively. The Fachhochschule opened in 1991, followed by the new university in 1994. Between 2005 and 2008, the economic situation improved as the unemployment rate decreased and new enterprises developed. In addition, the population began to increase once again.
A school shooting occurred on 26 April 2002 at the Gutenberg-Gymnasium.
Since the 1990s, organized crime has gained a foothold in Erfurt, with several mafia groups, including the Armenian mafia, present in the city. Among other events, there have been a robbery and an arson attack targeting the gastronomy sector, and in 2014 there was a shoot-out in an open street.
Erfurt is situated in the south of the Thuringian basin, a fertile agricultural area between the Harz mountains 80 km (50 mi) to the north and the Thuringian Forest 30 km (19 mi) to the southwest. Whereas the northern parts of the city area are flat, the southern ones consist of hilly landscape rising to 430 m in elevation. In this part lies the municipal forest of Steigerwald, with beeches and oaks as the main tree species. To the east and to the west are some non-forested hills, so that the Gera river valley within the town forms a basin. North of the city some gravel pits are in operation, while others are abandoned, flooded and used as leisure areas.
Erfurt has a humid continental climate (Dfb) or an oceanic climate (Cfb) according to the Köppen climate classification system. Summers are warm and sometimes humid, with average high temperatures of 23 °C (73 °F) and lows of 12 °C (54 °F). Winters are relatively cold, with average high temperatures of 2 °C (36 °F) and lows of −3 °C (27 °F). The city's topography creates a microclimate, caused by the location inside a basin, with occasional temperature inversions in winter (quite cold nights below −20 °C (−4 °F)) and inadequate air circulation in summer. Annual precipitation is only 502 millimeters (19.8 in), with moderate rainfall throughout the year. Light snowfall mainly occurs from December through February, but snow cover does not usually remain for long.
Erfurt abuts the districts of Sömmerda (municipalities Witterda, Elxleben, Walschleben, Riethnordhausen, Nöda, Alperstedt, Großrudestedt, Udestedt, Kleinmölsen and Großmölsen) in the north, Weimarer Land (municipalities Niederzimmern, Nohra, Mönchenholzhausen and Klettbach) in the east, Ilm-Kreis (municipalities Kirchheim, Rockhausen and Amt Wachsenburg) in the south and Gotha (municipalities Nesse-Apfelstädt, Nottleben, Zimmernsupra and Bienstädt) in the west.
The city itself is divided into 53 districts. The centre is formed by the district Altstadt (old town) and the Gründerzeit districts Andreasvorstadt in the northwest, Johannesvorstadt in the northeast, Krämpfervorstadt in the east, Daberstedt in the southeast, Löbervorstadt in the southwest and Brühlervorstadt in the west. Further former industrial districts are Ilversgehofen (incorporated in 1911), Hohenwinden and Sulzer Siedlung in the north. Another group of districts is marked by Plattenbau settlements constructed during the DDR period: Berliner Platz, Moskauer Platz, Rieth, Roter Berg and Johannesplatz in the northern city parts, as well as Melchendorf, Wiesenhügel and Herrenberg in the southern ones.
Finally, there are many villages with an average population of approximately 1,000 which were incorporated during the 20th century; however, they have mostly stayed rural to date:
Erfurt-Southeast (German: Erfurt-Südost) is the collective name for a series of prefabricated housing areas that emerged in the south-east of Erfurt in the last ten years of the GDR.
The districts of Melchendorf, Herrenberg and Wiesenhügel belong to Erfurt-Südost, all of which were formed from the former local area of Melchendorf. The village of Melchendorf, with around 1,000 inhabitants, lies between the prefabricated building areas. In addition to the old village, the district of Melchendorf also includes the prefab housing areas of Drosselberg and Buchenberg as well as several four-storey apartment blocks from the 1950s and 1960s on Kranichfelder Straße. Around 24,000 people still live in the large settlement, which was once designed for almost 40,000 inhabitants.
After Erfurt-Nord, Erfurt-Südost is the second large prefabricated housing area in the state capital. The problems associated with large housing estates are not as pronounced in the Southeast as in the North, but they are still present.
Around the year 1500, the city had 18,000 inhabitants and was one of the largest cities in the Holy Roman Empire.
The population then more or less stagnated until the 19th century. The population of Erfurt was 21,000 in 1820 and increased to 32,000 in 1847, the year the railway arrived and industrialization began. In the following decades Erfurt grew to 130,000 inhabitants at the beginning of World War I and 190,000 in 1950. A maximum was reached in 1988 with 220,000 persons. In 1991, after German reunification and when Erfurt became the capital of the state of Thuringia, it had a population of about 205,000. The bad economic situation in eastern Germany after reunification resulted in a decline in population, which fell to 200,000 in 2002 before rising again to 206,000 in 2011. The average population growth between 2009 and 2012 was approximately 0.68% per annum, whereas the population in the bordering rural regions is shrinking at an accelerating rate. Suburbanization played only a small role in Erfurt. It occurred after reunification for a short time in the 1990s, but most of the suburban areas were situated within the administrative city borders. Erfurt is also the 10th-largest city in Germany by area, covering 269.17 km² (103.93 sq mi).
The birth deficit was 200 in 2012, or −1.0 per 1,000 inhabitants (Thuringian average: −4.5; national average: −2.4). The net migration rate was +8.3 per 1,000 inhabitants in 2012 (Thuringian average: −0.8; national average: +4.6). The most important regions of origin of Erfurt migrants are rural areas of Thuringia, Saxony-Anhalt and Saxony, as well as foreign countries like Poland, Russia, Syria, Afghanistan and Hungary. Erfurt is today one of the more popular cities in the former East Germany due to its universities and broadcasting companies.
As in other eastern German cities, foreigners account for only a small share of Erfurt's population: circa 3.0% are non-Germans by citizenship and overall 5.9% are migrants (according to the 2011 EU census).
Due to the official atheism of the former GDR, most of the population is non-religious. 14.8% are members of the Evangelical Church in Central Germany and 6.8% are Catholics (according to the 2011 EU census). The Jewish community consists of 500 members, most of whom migrated to Erfurt from Russia and Ukraine in the 1990s.
The theologian, philosopher and mystic Meister Eckhart (c. 1260–1328) entered the Dominican monastery (Predigerkloster) in Erfurt when he was aged about 18 (around 1275). Eckhart was the Dominican prior at Erfurt from 1294 until 1298, and Vicar of Thuringia from 1298 to 1302. After a year in Paris, he returned to Erfurt in 1303 and administered his duties as Provincial of Saxony from there until 1311.
Martin Luther (1483–1546) studied law and philosophy at the University of Erfurt from 1501. He lived in St Augustine's Monastery in Erfurt as a friar from 1505 to 1511.
Johann Pachelbel (1653–1706) served as organist at the Predigerkirche (Preachers' Church) in Erfurt from June 1678 until August 1690. Pachelbel composed approximately seventy pieces for organ while in Erfurt.
The city is the birthplace of one of Johann Sebastian Bach's cousins, Johann Bernhard Bach, as well as Johann Sebastian Bach's father, Johann Ambrosius Bach. Bach's parents were married in 1668 in the Kaufmannskirche (Merchant's Church), which still stands on the main square, the Anger.
Alexander Müller (1808–1863), pianist, conductor and composer, was born in Erfurt. He later moved to Zürich, where he served as leader of the General Music Society's subscription concert series.
Max Weber (1864–1920) was born in Erfurt.
He was a sociologist, philosopher, lawyer, and political economist whose ideas have profoundly influenced modern social theory and social research.
After 1906, the composer Richard Wetz (1875–1935) lived in Erfurt and became the leading figure in the city's musical life. His major works were written here, including three symphonies, a Requiem and a Christmas Oratorio.
The textile designer Margaretha Reichardt (1907–1984) was born and died in Erfurt. She studied at the Bauhaus from 1926 to 1930, and while there worked with Marcel Breuer on his innovative chair designs. Her former home and weaving workshop in Erfurt, the Margaretha Reichardt Haus, is now a museum, managed by the Angermuseum Erfurt.
Famous contemporary musicians from Erfurt are Clueso, the Boogie Pimps and Yvonne Catterfeld.
Erfurt has a great variety of museums:
Since 2003, the modern opera house has been home to Theater Erfurt and its Philharmonic Orchestra. The "grand stage" section has 800 seats and the "studio stage" can hold 200 spectators. In September 2005, the opera Waiting for the Barbarians by Philip Glass premiered in the opera house. The Erfurt Theatre has been a source of controversy. In 2005, a performance of Engelbert Humperdinck's opera Hänsel und Gretel stirred up the local press, since the performance contained suggestions of pedophilia and incest. The opera was advertised in the programme with the addition "for adults only".
On 12 April 2008, a version of Verdi's opera Un ballo in maschera directed by Johann Kresnik opened at the Erfurt Theatre. The production stirred deep controversy by featuring nude performers in Mickey Mouse masks dancing on the ruins of the World Trade Center and a female singer with a painted-on Hitler toothbrush moustache performing a straight-arm Nazi salute, along with sinister portrayals of American soldiers, Uncle Sam, and Elvis Presley impersonators. The director described the production as a populist critique of modern American society, aimed at showing up the disparities between rich and poor. The controversy prompted one local politician to call for locals to boycott the performances, but this was largely ignored and the première was sold out.
The Messe Erfurt serves as home court for the Oettinger Rockets, a professional basketball team in Germany's first division, the Basketball Bundesliga. Notable types of sport in Erfurt are athletics, ice skating, cycling (with the oldest velodrome still in use in the world, opened in 1885), swimming, handball, volleyball, tennis and football. The city's football club, FC Rot-Weiß Erfurt, is a member of the 3. Fußball-Liga and is based in the Steigerwaldstadion, which has a capacity of 20,000. The Gunda-Niemann-Stirnemann Halle was the second indoor speed skating arena to be built in Germany.
Erfurt's cityscape features a medieval core of narrow, curved alleys in the centre, surrounded by a belt of Gründerzeit architecture created between 1873 and 1914. In 1873, the city's fortifications were demolished and it became possible to build houses in the area in front of the former city walls. In the following years, Erfurt saw a construction boom. In the northern area (the districts of Andreasvorstadt, Johannesvorstadt and Ilversgehofen), tenements for factory workers were built, whilst the eastern area (Krämpfervorstadt and Daberstedt) featured apartments for white-collar workers and clerks, and the southwestern part (Löbervorstadt and Brühlervorstadt), with its beautiful valley landscape, saw the construction of villas and mansions of rich factory owners and notables.
During the interwar period, some settlements in the Bauhaus style were realized, often as housing cooperatives. After World War II and over the whole GDR period, housing shortages remained a problem even though the government started a big apartment construction programme. Between 1970 and 1990, large Plattenbau settlements with high-rise blocks were constructed on the northern (for 50,000 inhabitants) and southeastern (for 40,000 inhabitants) periphery. After reunification, the renovation of old houses in the city centre and the Gründerzeit areas was a big issue. The federal government granted substantial subsidies, so that many houses could be restored.
Compared to many other German cities, little of Erfurt was destroyed in World War II. This is one reason why the centre today offers a mixture of medieval, Baroque and Neoclassical architecture, as well as buildings from the last 150 years.
Public green spaces are located along the Gera river and in several parks such as the Stadtpark, the Nordpark and the Südpark. The largest green area is the Egapark, a horticultural exhibition park and botanic garden established in 1961.
The city centre has about 25 churches and monasteries, most of them in Gothic style, some also in Romanesque style or a mixture of Romanesque and Gothic elements, and a few in later styles. The various steeples characterize the medieval centre and gave rise to one of Erfurt's nicknames, the "Thuringian Rome".
The oldest parts of Erfurt's Alte Synagoge (Old Synagogue) date to the 11th century. It was used until 1349, when the Jewish community was destroyed in a pogrom known as the Erfurt Massacre. The building has had many other uses since then. It was conserved in the 1990s and in 2009 it became a museum of Jewish history. A rare mikveh, a ritual bath, dating from c. 1250, was discovered by archeologists in 2007. It has been accessible to visitors on guided tours since September 2011. The Jewish heritage of Erfurt, including the Old Synagogue and the mikveh, became a UNESCO World Heritage Site in September 2023 and is the second Jewish heritage site in Germany to be listed by UNESCO.
When religious freedom was granted in the 19th century, some Jews returned to Erfurt. They built their synagogue on the banks of the Gera river and used it from 1840 until 1884. The neoclassical building is known as the Kleine Synagoge (Small Synagogue). Today it is used as an events centre and is also open to visitors.
A larger synagogue, the Große Synagoge (Great Synagogue), was opened in 1884 because the community had become larger and wealthier. This Moorish-style building was destroyed during the nationwide Nazi riots known as Kristallnacht on 9–10 November 1938. In 1947, the land which the Great Synagogue had occupied was returned to the Jewish community, and they built their current place of worship there, the Neue Synagoge (New Synagogue), which opened in 1952. It was the only synagogue building erected under communist rule in East Germany.
Besides the religious buildings, there is a lot of historic secular architecture in Erfurt, mostly concentrated in the city centre, but some 19th- and 20th-century buildings are located on the outskirts.
From 1066 until 1873, the old town of Erfurt was encircled by a fortified wall. About 1168, this was extended to run around the western side of Petersberg hill, enclosing it within the city boundaries.
After German Unification in 1871, Erfurt became part of the newly created German Empire.
The threat to the city from its Saxon neighbours and from Bavaria was no longer present, so it was decided to dismantle the city walls. Only a few remnants remain today. A piece of the inner wall can be found in a small park at the corner of Juri-Gagarin-Ring and Johannesstraße, and another piece at the flood ditch (Flutgraben) near Franckestraße. There is also a small restored part of the wall in the Brühler Garten, behind the Catholic orphanage. Only one of the wall's fortified towers was left standing, on Boyneburgufer, but this was destroyed in an air raid in 1944.
The Petersberg Citadel is one of the largest and best preserved city fortresses in Europe, covering an area of 36 hectares in the north-west of the city centre. It was built from 1665 on Petersberg hill and was in military use until 1963. Since 1990, it has been significantly restored and is now open to the public as an historic site.
The Cyriaksburg Citadel is a smaller citadel south-west of the city centre, dating from 1480. Today, it houses the German horticulture museum.
Between 1873 and 1914, a belt of Gründerzeit architecture emerged around the city centre. The mansion district in the south-west around Cyriakstraße, Richard-Breslau-Straße and Hochheimer Straße hosts some interesting Gründerzeit and Art Nouveau buildings.
The "Mühlenviertel" ("mill quarter") is an area of beautiful Art Nouveau apartment buildings, cobblestone streets and street trees just to the north of the old city, in the vicinity of the Nordpark, bordered by the Gera river on its east side. The Schmale Gera stream runs through the area. In the Middle Ages, numerous small enterprises using the power of water mills occupied the area, hence the name "Mühlenviertel", with street names such as Waidmühlenweg (woad, or indigo, mill way), Storchmühlenweg (stork mill way) and Papiermühlenweg (paper mill way).
The Bauhaus style is represented by some housing cooperative projects in the east around Flensburger Straße and Dortmunder Straße and in the north around Neuendorfstraße. The Lutherkirche in Magdeburger Allee (1927) is an Art Deco building. The former malt factory "Wolff" at Theo-Neubauer-Straße in the east of Erfurt is a large industrial complex, built between 1880 and 1939 and in use until 2000. A new use has not yet been found, but the area is sometimes used as a location in film productions because of its atmosphere.
Examples of Nazi architecture include the buildings of the Landtag (Thuringian parliament) and the Thüringenhalle (an event hall) in the south at Arnstädter Straße. While the Landtag building (1930s) leans more towards the neo-Roman/fascist style, the Thüringenhalle (1940s) is marked by some neo-Germanic Heimatschutz-style elements.
The Stalinist early-GDR style is manifested in the main building of the university at Nordhäuser Straße (1953), while the later, more international modern GDR style is represented by the horticultural exhibition centre "Egapark" at Gothaer Straße, the Plattenbau housing complexes such as Rieth or Johannesplatz, and the redevelopment of the Löbertor and Krämpfertor areas along Juri-Gagarin-Ring in the city centre.
The current international glass and steel architecture is dominant among most larger new buildings, such as the Federal Labour Court of Germany (1999), the new opera house (2003), the new main station (2007), the university library, the Erfurt Messe (convention centre) and the Gunda Niemann-Stirnemann ice rink.
In recent years, the economic situation of the city has improved: the unemployment rate declined from 21% in 2005 to 9% in 2013. Nevertheless, some 14,000 households with 24,500 persons (12% of the population) are dependent upon state social benefits (Hartz IV).
Farming has a great tradition in Erfurt: the cultivation of woad made the city rich during the Middle Ages. Today, horticulture and the production of flower seeds are still an important business in Erfurt. Fruit (such as apples, strawberries and sweet cherries), vegetables (e.g. cauliflower, potatoes, cabbage and sugar beet) and grain are also grown on more than 60% of the municipal territory.
Industrialization in Erfurt started around 1850. Until World War I, many factories were founded in different sectors, such as engine building, shoes, guns, malt and later electrical engineering, so that there was no industrial monoculture in the city. After 1945, the companies were nationalized by the GDR government, which led to the decline of some of them. After reunification, nearly all factories were closed, either because they failed to adapt successfully to a free market economy or because the German government sold them to west German businessmen who closed them to avoid competition with their own enterprises. However, in the early 1990s, the federal government started to subsidize the foundation of new companies. It still took a long time before the economic situation stabilized, around 2006. Since then, unemployment has decreased and, overall, new jobs have been created. Today, there are many small and medium-sized companies in Erfurt, with a focus on electrical engineering, semiconductors and photovoltaics. Engine production, food production, the Braugold brewery, and Born Feinkost, a producer of Thuringian mustard, remain important industries.
Erfurt is an Oberzentrum (which means "supra-centre" according to central place theory) in German regional planning. Such centres are always hubs of service businesses and public services such as hospitals, universities, research, trade fairs and retail. Additionally, Erfurt is the capital of the federal state of Thuringia, so it hosts many administrative institutions, such as all the Thuringian state ministries and some nationwide authorities. Typical of Erfurt are the logistics business, with many distribution centres of big companies, the Erfurt Trade Fair, and the media sector, with KiKa and MDR as public broadcasters.
A growing industry is tourism, due to the various historical sights of Erfurt. There are 4,800 hotel beds, and in 2012, 450,000 overnight visitors spent a total of 700,000 nights in hotels. Nevertheless, most tourists are one-day visitors from Germany. The Christmas Market in December attracts some 2,000,000 visitors each year.
The ICE railway network puts Erfurt 1½ hours from Berlin, 2½ hours from Frankfurt, 2 hours from Dresden, and 45 minutes from Leipzig. In 2017, the ICE line to Munich opened, cutting the trip from Munich to Erfurt main station to only 2½ hours. There are regional trains from Erfurt to Weimar, Jena, Gotha, Eisenach, Bad Langensalza, Magdeburg, Nordhausen, Göttingen, Mühlhausen, Würzburg, Meiningen, Ilmenau, Arnstadt, and Gera. In freight transport, there is an intermodal terminal in the district of Vieselbach (Güterverkehrszentrum, GVZ) with connections to rail and the autobahn.
The two Autobahnen crossing each other nearby at Erfurter Kreuz are the Bundesautobahn 4 (Frankfurt–Dresden) and the Bundesautobahn 71 (Schweinfurt–Sangerhausen).
Together with the east tangent, both motorways form a ring road around the city and carry interregional traffic around the centre. Whereas the A 4 was built in the 1930s, the A 71 came into being after reunification, in the 1990s and 2000s. In addition to the two motorways there are two Bundesstraßen: the Bundesstraße 7 runs parallel to the A 4 and connects Erfurt with Gotha in the west and Weimar in the east. The Bundesstraße 4 is a connection between Erfurt and Nordhausen in the north. Its southern section towards Coburg was downgraded when the A 71 was finished (in this section, the A 71 now effectively serves as the B 4). Within the ring road, the B 7 and B 4 have likewise been downgraded, so that the city government, rather than the German federal government, has to pay for their maintenance. Since 2012, access to the city has been restricted for some vehicles under a low-emission zone (Umweltzone). Large parts of the inner city are a pedestrian area which cannot be reached by car (except for residents).
The Erfurt public transport system is marked by the area-wide Erfurt Stadtbahn (light rail) network, established as a tram system in 1883, upgraded to a light rail (Stadtbahn) system in 1997, and continually expanded and upgraded through the 2000s. Today, there are six Stadtbahn lines, running every ten minutes on every light rail route. Additionally, Erfurt operates a bus system, which connects the sparsely populated outer districts of the region to the city centre. Both systems are organized by SWE EVAG, a transit company owned by the city administration. Trolleybuses were in service in Erfurt from 1948 until 1975.
Erfurt-Weimar Airport lies 3 km (2 mi) west of the city centre. It is linked to the central train station via Stadtbahn (tram). It was significantly extended in the 1990s, with flights mostly to Mediterranean holiday destinations and to London during the peak Christmas market tourist season. Connections to longer-haul flights are easily accessible via Frankfurt Airport, which can be reached in 2 hours via a direct train from Frankfurt Airport to Erfurt, and via Leipzig/Halle Airport, which can be reached within half an hour.
Biking has become increasingly popular since construction of high-quality cycle tracks began in the 1990s. There are cycle lanes for general commuting within Erfurt city. Long-distance trails, such as the Gera track and the Radweg Thüringer Städtekette (Thuringian cities trail), connect points of tourist interest. The former runs along the Gera river valley from the Thuringian Forest to the river Unstrut; the latter follows the medieval Via Regia from Eisenach to Altenburg via Gotha, Erfurt, Weimar, and Jena. The Rennsteig Cycle Way was opened in 2000. This designated high-grade hiking and bike trail runs along the ridge of the Thuringian Central Uplands. The bike trail, about 200 km (124 mi) long, occasionally departs from the course of the historic Rennsteig hiking trail, which dates back to the 1300s, to avoid steep inclines. It is therefore about 30 km (19 mi) longer than the hiking trail. The Rennsteig is connected to the E3 European long-distance path, which goes from the Atlantic coast of Spain to the Black Sea coast of Bulgaria, and the E6 European long-distance path, running from Arctic Finland to Turkey.
After reunification, the educational system was reorganized. The University of Erfurt, founded in 1379 and closed in 1816, was refounded in 1994 with a focus on social sciences, modern languages, humanities and teacher training.
Today, there are approximately 6,000 students working within four faculties, the Max Weber Center for Advanced Cultural and Social Studies, and three academic research institutes. The university has an international reputation and participates in international student exchange programmes.
The Fachhochschule Erfurt is a university of applied sciences, founded in 1991, which offers a combination of academic training and practical experience in subjects such as social work and social pedagogy, business studies, and engineering. There are nearly 5,000 students in six faculties, of which the faculty of landscaping and horticulture has a national reputation.
The International University of Applied Sciences Bad Honnef – Bonn (IUBH) is a privately run university with a focus on business and economics. It merged with the former Adam-Ries-Fachhochschule in 2013.
The world-renowned Bauhaus design school was founded in 1919 in the city of Weimar, approximately 20 km (12 mi) from Erfurt, 12 minutes by train. The buildings are now part of a World Heritage Site and are today used by the Bauhaus-Universität Weimar, which teaches design, arts, media and technology-related subjects.
Furthermore, there are eight Gymnasien: six state-owned, one Catholic and one Protestant (Evangelisches Ratsgymnasium Erfurt). One of the state-owned schools is a Sportgymnasium, an elite boarding school for young talents in athletics, swimming, ice skating or football. Another state-owned school, the Albert Schweitzer Gymnasium, is an elite boarding school that offers a focus on the sciences in addition to the common curriculum.
The German national public television children's channel KiKa is based in Erfurt. MDR, Mitteldeutscher Rundfunk, a radio and television company, has a broadcast centre and studios in Erfurt. The Thüringer Allgemeine is a statewide newspaper headquartered in the city.
The first freely elected mayor after German reunification was Manfred Ruge of the Christian Democratic Union, who served from 1990 to 2006. Since 2006, Andreas Bausewein of the Social Democratic Party (SPD) has been mayor. The most recent mayoral election was held on 15 April 2018, with a runoff held on 29 April, and the results were as follows:
The most recent city council election was held on 26 May 2019, and the results were as follows:
Erfurt is twinned with:
[ { "paragraph_id": 0, "text": "Erfurt (German pronunciation: [ˈɛʁfʊʁt] ) is the capital and largest city of the Central German state of Thuringia. It is in the wide valley of the River Gera, in the southern part of the Thuringian Basin, north of the Thuringian Forest, and in the middle of a line of the six largest Thuringian cities (Thüringer Städtekette), stretching from Eisenach in the west, via Gotha, Erfurt, Weimar and Jena, to Gera in the east, close to the geographic centre of Germany. Erfurt is 100 km (62 mi) south-west of Leipzig, 250 km (155 mi) north-east of Frankfurt, 300 km (186 mi) south-west of Berlin and 400 km (249 mi) north of Munich.", "title": "" }, { "paragraph_id": 1, "text": "Erfurt's old town is one of the best preserved medieval city centres in Germany. Tourist attractions include the Merchants' Bridge (Krämerbrücke), the Old Synagogue (Alte Synagoge), the oldest in Europe and a UNESCO World Heritage Site, Cathedral Hill (Domberg) with the ensemble of Erfurt Cathedral (Erfurter Dom) and St Severus' Church (Severikirche) and Petersberg Citadel (Zitadelle Petersberg), one of the largest and best preserved town fortresses in Central Europe. The city's economy is based on agriculture, horticulture and microelectronics. Its central location has made it a logistics hub for Germany and central Europe. Erfurt hosts the second-largest trade fair in eastern Germany (after Leipzig), as well as the public television children's channel KiKa.", "title": "" }, { "paragraph_id": 2, "text": "The city is on the Via Regia, a medieval trade and pilgrims' road network. Modern Erfurt is also a hub for ICE high speed trains and other German and European transport networks. Erfurt was first mentioned in 742, as Saint Boniface founded the diocese. Although the town did not belong to any of the Thuringian states politically, it quickly became the economic centre of the region and was a member of the Hanseatic League. It was part of the Electorate of Mainz during the Holy Roman Empire, and became part of the Kingdom of Prussia in 1802. From 1949 until 1990 Erfurt was part of the German Democratic Republic (East Germany).", "title": "" }, { "paragraph_id": 3, "text": "The University of Erfurt was founded in 1379, making it the first university to be established within the geographic area which constitutes modern Germany. It closed in 1816 and was re-established in 1994. Martin Luther (1483–1546) was its most famous student, studying there from 1501 before entering St Augustine's Monastery in 1505. Other noted Erfurters include the medieval philosopher and mystic Meister Eckhart (c. 1260–1328), the Baroque composer Johann Pachelbel (1653–1706) and the sociologist Max Weber (1864–1920).", "title": "" }, { "paragraph_id": 4, "text": "Erfurt is an old Germanic settlement. The earliest evidence of human settlement dates from the prehistoric era; archaeological finds from the north of Erfurt revealed human traces from the paleolithic period, ca. 10,000 BCE.", "title": "History" }, { "paragraph_id": 5, "text": "To the west of Erfurt in Frienstedt existed, in the AD era, a big Germanic village, which was found during the construction of a highway. Where they also discovered the oldest Germanic word ever discovered in Central Germany written in runic script was found on a comb from a sacrificial shaft the word: \"kaba\". From Roman Times, however, they found 200 coins dating back to the third century, plus 150 Roman ceramic fragments and more than 200 fibulae. 
Also 11 inhumation graves of the Haßleben-Leuna group, which is an archeological cultural group.", "title": "History" }, { "paragraph_id": 6, "text": "The Melchendorf dig in the southern city part showed a settlement from the neolithic period. The Thuringii inhabited the Erfurt area in c. 480 and gave their name to Thuringia in c. 500.", "title": "History" }, { "paragraph_id": 7, "text": "The town is first mentioned in 742 under the name of \"Erphesfurt\": in that year, Saint Boniface wrote to Pope Zachary to inform him that he had established three dioceses in central Germany, one of them \"in a place called Erphesfurt, which for a long time has been inhabited by pagan natives.\" All three dioceses (the other two were Würzburg and Büraburg) were confirmed by Zachary the next year, though in 755 Erfurt was brought into the diocese of Mainz. That the place was populous already is borne out by archeological evidence, which includes 23 graves and six horse burials from the sixth and seventh centuries.", "title": "History" }, { "paragraph_id": 8, "text": "Throughout the Middle Ages, Erfurt was an important trading town because of its location, near a ford across the Gera river. Together with the other five Thuringian woad towns of Gotha, Tennstedt, Arnstadt and Langensalza it was the centre of the German woad trade, which made those cities very wealthy. Erfurt was the junction of important trade routes: the Via Regia was one of the most used east–west roads between France and Russia (via Frankfurt, Erfurt, Leipzig and Wrocław) and another route in the north–south direction was the connection between the Baltic Sea ports (e. g. Lübeck) and the potent upper Italian city-states like Venice and Milan.", "title": "History" }, { "paragraph_id": 9, "text": "During the tenth and eleventh centuries both the Emperor and the Electorate of Mainz held some privileges in Erfurt. The German kings had an important monastery on Petersberg hill and the Archbishops of Mainz collected taxes from the people. Around 1100, some people became free citizens by paying the annual \"Freizins\" (liberation tax), which marks a first step in becoming an independent city. During the 12th century, as a sign of more and more independence, the citizens built a city wall around Erfurt (in the area of today's Juri-Gagarin-Ring). After 1200, independence was fulfilled and a city council was founded in 1217; the town hall was built in 1275. In the following decades, the council bought a city-owned territory around Erfurt which consisted at its height of nearly 100 villages and castles and even another small town (Sömmerda). Erfurt became an important regional power between the Landgraviate of Thuringia around, the Electorate of Mainz to the west and the Electorate of Saxony to the east. Between 1306 and 1481, Erfurt was allied with the two other major Thuringian cities (Mühlhausen and Nordhausen) in the Thuringian City Alliance and the three cities joined the Hanseatic League together in 1430. A peak in economic development was reached in the 15th century, when the city had a population of 20,000 making it one of the largest in Germany. Between 1432 and 1446, a second and higher city wall was established. In 1483, a first city fortress was built on Cyriaksburg hill in the southwestern part of the town.", "title": "History" }, { "paragraph_id": 10, "text": "In the year 1184, Erfurt was the location of a notable accident called the Erfurter Latrinensturz ('Erfurt latrine fall'). 
King Henry VI held council in a building of the Erfurt Cathedral to negotiate peace between two of his vassals, Archbishop Konrad I of Mainz and Landgrave Ludwig III of Thuringia. The amassed weight of all the gathered men proved too heavy for the floor to bear, which collapsed. According to contemporary accounts, dozens of people fell to their death into the latrine pit below. Ludwig III, Konrad I and Henry VI survived the affair.", "title": "History" }, { "paragraph_id": 11, "text": "The Jewish community of Erfurt was founded in the 11th century and became, together with Mainz, Worms and Speyer, one of the most influential in Germany. The Old Synagogue is still extant and is a museum today, as is the mikveh at Gera river near Krämerbrücke. In 1349, during the wave of Black Death Jewish persecutions across Europe, the Jews of Erfurt were rounded up, with more than 100 killed and the rest driven from the city. Before the persecution, a wealthy Jewish merchant buried his property in the basement of his house. In 1998, this treasure was found during construction works. The Erfurt Treasure with various gold and silver objects is shown in the exhibition in the synagogue today. Only a few years after 1349, the Jews moved back to Erfurt and founded a second community, which was disbanded by the city council in 1458. Because of their exceptional testimony to the life of a medieval Jewish community, the Jewish sites in Erfurt were inscribed on the UNESCO World Heritage List in 2023.", "title": "History" }, { "paragraph_id": 12, "text": "In 1379, the University of Erfurt was founded. Together with the University of Cologne it was one of the first city-owned universities in Germany, while they were usually owned by the Landesherren. Some buildings of this old university are extant or restored in the \"Latin Quarter\" in the northern city centre (like Collegium Maius, student dorms \"Georgenburse\" and others, the hospital and the church of the university). The university quickly became a hotspot of German cultural life in Renaissance humanism with scholars like Ulrich von Hutten, Helius Eobanus Hessus and Justus Jonas.", "title": "History" }, { "paragraph_id": 13, "text": "In 1501 Martin Luther (1483–1546) moved to Erfurt and began his studies at the university. After 1505, he lived at St. Augustine's Monastery as a friar. In 1507 he was ordained as a priest in Erfurt Cathedral. He moved permanently to Wittenberg in 1511. Erfurt was an early adopter of the Protestant Reformation, in 1521.", "title": "History" }, { "paragraph_id": 14, "text": "In 1530, the city became one of the first in Europe to be officially bi-confessional with the Hammelburg Treaty. It kept that status through all the following centuries. The later 16th and the 17th century brought a slow economic decline of Erfurt. Trade shrank, the population was falling and the university lost its influence. The city's independence was endangered. In 1664, the city and surrounding area were brought under the dominion of the Electorate of Mainz and the city lost its independence. The Electorate built a huge fortress on Petersberg hill between 1665 and 1726 to control the city and instituted a governor to rule Erfurt.", "title": "History" }, { "paragraph_id": 15, "text": "In 1682 and 1683 Erfurt experienced the worst plague years in its history. In 1683 more than half of the population died because of the deadly disease.", "title": "History" }, { "paragraph_id": 16, "text": "In Erfurt witch-hunts are known from 1526 to 1705. 
Trial records are only incomplete. Twenty people were involved in witch trials and at least eight people died.", "title": "History" }, { "paragraph_id": 17, "text": "During the late 18th century, Erfurt saw another cultural peak. Governor Karl Theodor Anton Maria von Dalberg had close relations with Johann Wolfgang von Goethe, Friedrich Schiller, Johann Gottfried Herder, Christoph Martin Wieland and Wilhelm von Humboldt, who often visited him at his court in Erfurt.", "title": "History" }, { "paragraph_id": 18, "text": "Erfurt became part of the Kingdom of Prussia in 1802, to compensate for territories Prussia lost to France on the Left Bank of the Rhine. In the Capitulation of Erfurt, the city, its 12,000 Prussian and Saxon defenders under William VI, Prince of Orange-Nassau, 65 artillery pieces, and the Petersberg Citadel and Cyriaksburg Citadel Cyriaksburg, were handed over to the French on 16 October 1806. At the time of the capitulation, Joachim Murat, Marshal of France, had about 16,000 troops near Erfurt. With the attachment of the Saxe-Weimar territory of Blankenhain, the city became part of the First French Empire in 1806 as the Principality of Erfurt, directly subordinate to Napoleon as an \"imperial state domain\" (French: domaine réservé à l'empereur), separate from the Confederation of the Rhine, which the surrounding Thuringian states had joined. Erfurt was administered by a civilian and military Senate (Finanz- und Domänenkammer Erfurt) under a French governor, based in the Kurmainzische Statthalterei, previously the seat of the city's governor under the Electorate. Napoleon first visited the principality on 23 July 1807, inspecting the citadels and fortifications. In 1808, the Congress of Erfurt was held with Napoleon and Alexander I of Russia visiting the city.", "title": "History" }, { "paragraph_id": 19, "text": "During their administration, the French introduced street lighting and a tax on foreign horses to pay for maintaining the road surface. The Peterskirche suffered under the French occupation, with its inventory being auctioned off to other local churches – including the organ, bells and even the tower of the Corpus Christi chapel (Fronleichnamskapelle) – and the former monastery's library being donated to the University of Erfurt (and then to the Boineburg Library when the university closed in 1816). Similarly the Cyriaksburg Citadel was damaged by the French, with the city-side walls being partially dismantled in the hunt for imagined treasures from the convent, workers being paid from the sale of the building materials.", "title": "History" }, { "paragraph_id": 20, "text": "In 1811, to commemorate the birth of the Prince Imperial, a 70-foot (21-metre) ceremonial column (Die Napoleonsäule) of wood and plaster was erected on the common. 
Similarly, the Napoleonshöhe – a Greek-style temple topped by a winged victory with shield, sword and lance and containing a bust of Napoleon sculpted by Friedrich Döll – was erected in the Stiegerwald woods, including a grotto with fountain and flower beds, using a large pond (lavoratorium) from the Peterskirche, inaugurated with ceremony on 14 August 1811 after extravagant celebrations for Napoleon's birthday, which were repeated in 1812 with a concert in the Predigerkirche conducted by Louis Spohr.", "title": "History" }, { "paragraph_id": 21, "text": "With the Sixth Coalition forming after French defeat in Russia, on 24 February 1813 Napoleon ordered the Petersburg Citadel to prepare for siege, visiting the city on 25 April to inspect the fortifications, in particular both Citadels. On 10 July 1813, Napoleon put Alexandre d'Alton [fr], baron of the Empire, in charge of the defences of Erfurt. However, when the French decreed that 1000 men would be conscripted into the Grande Armée, the recruits were joined by other citizens in rioting on 19 July that led to 20 arrests, of whom 2 were sentenced to death by French court-martial; as a result, the French ordered the closure of all inns and alehouses.", "title": "History" }, { "paragraph_id": 22, "text": "Within a week of the Sixth Coalition's decisive victory at Leipzig (16–19 October 1813), however, Erfurt was besieged by Prussian, Austrian and Russian troops under the command of Prussian Lt Gen von Kleist. After a first capitulation signed by d'Alton on 20 December 1813 the French troops withdrew to the two fortresses of Petersberg and Cyriaksburg, allowing for the Coalition forces to march into Erfurt on 6 January 1814 to jubilant greetings; the Napoleonsäule ceremonial column was burned and destroyed as a symbol of the citizens' oppression under the French; similarly the Napoleonshöhe was burned on 1 November 1813 and completely destroyed by Erfurters and their besiegers in 1814. After a call for volunteers 3 days later, 300 Erfurters joined the Coalition armies in France. Finally, in May 1814, the French capitulated fully, with 1,700 French troops vacating the Petersberg and Cyriaksburg fortresses. During the two and a half months of siege, the mortality rate rose in the city greatly; 1,564 Erfurt citizens died in 1813, around a thousand more than the previous year.", "title": "History" }, { "paragraph_id": 23, "text": "After the Congress of Vienna, Erfurt was restored to Prussia on 21 June 1815, becoming the capital of one of the three districts (Regierungsbezirke) of the new Province of Saxony, but some southern and eastern parts of Erfurter lands joined Blankenhain in being transferred to the Grand Duchy of Saxe-Weimar-Eisenach the following September. Although enclosed by Thuringian territory in the west, south and east, the city remained part of the Prussian Province of Saxony until 1944.", "title": "History" }, { "paragraph_id": 24, "text": "After the 1848 Revolution, many Germans desired to have a united national state. An attempt in this direction was the failed Erfurt Union of German states in 1850.", "title": "History" }, { "paragraph_id": 25, "text": "The Industrial Revolution reached Erfurt in the 1840s, when the Thuringian Railway connecting Berlin and Frankfurt was built. During the following years, many factories in different sectors were founded. One of the biggest was the \"Royal Gun Factory of Prussia\" in 1862. 
After the Unification of Germany in 1871, Erfurt moved from the southern border of Prussia to the centre of Germany, so the fortifications of the city were no longer needed. The demolition of the city fortifications in 1873 led to a construction boom in Erfurt, because it was now possible to build in the area formerly occupied by the city walls and beyond. Many public and private buildings emerged and the infrastructure (such as a tramway, hospitals, and schools) improved rapidly. The number of inhabitants grew from 40,000 around 1870 to 130,000 in 1914 and the city expanded in all directions.", "title": "History" }, { "paragraph_id": 26, "text": "The \"Erfurt Program\" was adopted by the Social Democratic Party of Germany during its congress at Erfurt in 1891.", "title": "History" }, { "paragraph_id": 27, "text": "Between the wars, the city kept growing. Housing shortages were fought with building programmes and social infrastructure was broadened according to the welfare policy in the Weimar Republic. The Great Depression between 1929 and 1932 led to a disaster for Erfurt, nearly one out of three became unemployed. Conflicts between far-left and far-right-oriented milieus increased and many inhabitants supported the new Nazi government and Adolf Hitler. Others, especially some communist workers, put up resistance against the new administration. In 1938, the new synagogue was destroyed during the Kristallnacht. Jews lost their property and emigrated or were deported to Nazi concentration camps (together with many communists). In 1914, the company Topf and Sons began the manufacture of crematoria later becoming the market leader in this industry. Under the Nazis, JA Topf & Sons supplied specially developed crematoria, ovens and associated plants to the Auschwitz-Birkenau, Buchenwald and Mauthausen-Gusen concentration camps. On 27 January 2011 a memorial and museum dedicated to the Holocaust victims was opened at the former company premises in Erfurt.", "title": "History" }, { "paragraph_id": 28, "text": "During World War II, Erfurt experienced more than 27 British and American air raids, about 1600 civilians died. Bombed as a target of the Oil Campaign of World War II, Erfurt suffered only limited damage and was captured on 12 April 1945, by the US 80th Infantry Division. On 3 July, American troops left the city, which then became part of the Soviet Zone of Occupation and eventually of the German Democratic Republic (East Germany). In 1948, Erfurt became the capital of Thuringia, replacing Weimar. In 1952, the Länder in the GDR were dissolved in favour of centralization under the new socialist government. Erfurt then became the capital of a new \"Bezirk\" (district). In 1953, the Hochschule of education was founded, followed by the Hochschule of medicine in 1954, the first academic institutions in Erfurt since the closing of the university in 1816.", "title": "History" }, { "paragraph_id": 29, "text": "On 19 March 1970, the East and West German heads of government Willi Stoph and Willy Brandt met in Erfurt, the first such meeting since the division of Germany. During the 1970s and 1980s, as the economic situation in GDR worsened, many old buildings in city centre decayed, while the government fought against the housing shortage by building large Plattenbau settlements in the periphery. 
The Peaceful Revolution of 1989/1990 led to German reunification.", "title": "History" }, { "paragraph_id": 30, "text": "With the re-formation of the state of Thuringia in 1990, the city became the state capital. After reunification, a deep economic crisis occurred in Eastern Germany. Many factories closed and many people lost their jobs and moved to the former West Germany. At the same time, many buildings were redeveloped and the infrastructure improved massively. In 1994, the new university was opened, as was the Fachhochschule in 1991. Between 2005 and 2008, the economic situation improved as the unemployment rate decreased and new enterprises developed. In addition, the population began to increase once again.", "title": "History" }, { "paragraph_id": 31, "text": "A school shooting occurred on 26 April 2002 at the Gutenberg-Gymnasium.", "title": "History" }, { "paragraph_id": 32, "text": "Since the 1990s, organized crime has gained a foothold in Erfurt, with several mafia groups, including the Armenian mafia present in the city. Among other events, there has been a robbery and an arson attack targeting the gastronomy sector and in 2014 there was a shoot-out in an open street.", "title": "History" }, { "paragraph_id": 33, "text": "Erfurt is situated in the south of the Thuringian basin, a fertile agricultural area between the Harz mountains 80 km (50 mi) to the north and the Thuringian Forest 30 km (19 mi) to the southwest. Whereas the northern parts of the city area are flat, the southern ones consist of hilly landscape up to 430 m of elevation. In this part lies the municipal forest of Steigerwald with beeches and oaks as main tree species. To the east and to the west are some non-forested hills so that the Gera river valley within the town forms a basin. North of the city are some gravel pits in operation, while others are abandoned, flooded and used as leisure areas.", "title": "Geography" }, { "paragraph_id": 34, "text": "Erfurt has a humid continental climate (Dfb) or an oceanic climate (Cfb) according to the Köppen climate classification system. Summers are warm and sometimes humid with average high temperatures of 23 °C (73 °F) and lows of 12 °C (54 °F). Winters are relatively cold with average high temperatures of 2 °C (36 °F) and lows of −3 °C (27 °F). The city's topography creates a microclimate caused by the location inside a basin with sometimes inversion in winter (quite cold nights under −20 °C (−4 °F)) and inadequate air circulation in summer. Annual precipitation is only 502 millimeters (19.8 in) with moderate rainfall throughout the year. Light snowfall mainly occurs from December through February, but snow cover does not usually remain for long.", "title": "Geography" }, { "paragraph_id": 35, "text": "Erfurt abuts the districts of Sömmerda (municipalities Witterda, Elxleben, Walschleben, Riethnordhausen, Nöda, Alperstedt, Großrudestedt, Udestedt, Kleinmölsen and Großmölsen) in the north, Weimarer Land (municipalities Niederzimmern, Nohra, Mönchenholzhausen and Klettbach) in the east, Ilm-Kreis (municipalities Kirchheim, Rockhausen and Amt Wachsenburg) in the south and Gotha (municipalities Nesse-Apfelstädt, Nottleben, Zimmernsupra and Bienstädt) in the west.", "title": "Geography" }, { "paragraph_id": 36, "text": "The city itself is divided into 53 districts. 
The centre is formed by the district Altstadt (old town) and the Gründerzeit districts Andreasvorstadt in the northwest, Johannesvorstadt in the northeast, Krämpfervorstadt in the east, Daberstedt in the southeast, Löbervorstadt in the southwest and Brühlervorstadt in the west. More former industrial districts are Ilversgehofen (incorporated in 1911), Hohenwinden and Sulzer Siedlung in the north. Another group of districts is marked by Plattenbau settlements, constructed during the DDR period: Berliner Platz, Moskauer Platz, Rieth, Roter Berg and Johannesplatz in the northern as well as Melchendorf, Wiesenhügel and Herrenberg in the southern city parts.", "title": "Geography" }, { "paragraph_id": 37, "text": "Finally, there are many villages with an average population of approximately 1,000 which were incorporated during the 20th century; however, they have mostly stayed rural to date:", "title": "Geography" }, { "paragraph_id": 38, "text": "Erfurt-Southeast (German: Erfurt-Südost) is the collective name for a series of prefabricated housing areas that emerged in the south-east of Erfurt in the last ten years of the GDR.", "title": "Geography" }, { "paragraph_id": 39, "text": "The districts of Melchendorf , Herrenberg and Wiesen Hügel belong to Erfurt-Südost , all of which were formed from the former local area of Melchendorf. The village of Melchendorf with around 1000 inhabitants lies between the prefabricated building areas. In addition to the old village, the district of Melchendorf also includes the prefab housing areas of Drosselberg and Buchenberg as well as several four-story apartment blocks from the 1950s and 1960s on Kranichfelder Strasse. Around 24,000 people still live in the large settlement, which was once designed for almost 40,000 inhabitants.", "title": "Geography" }, { "paragraph_id": 40, "text": "In addition to Erfurt-Nord, Erfurt-Südost is the second large prefabricated building area in the state capital. The problems associated with large housing estates are not as pronounced in the Southeast as in the North, but they are still present. Erfurt-Südost is the collective name for a series of prefabricated housing areas that emerged in the south-east of Erfurt in the last ten years of the GDR.", "title": "Geography" }, { "paragraph_id": 41, "text": "The districts of Melchendorf , Herrenberg and Wiesen Hügel belong to Erfurt-Südost , all of which were formed from the former local area of Melchendorf. The village of Melchendorf with around 1000 inhabitants lies between the prefabricated building areas. In addition to the old village, the district of Melchendorf also includes the prefab housing areas of Drosselberg and Buchenberg as well as several four-story apartment blocks from the 1950s and 1960s on Kranichfelder Strasse. Around 24,000 people still live in the large settlement, which was once designed for almost 40,000 inhabitants.", "title": "Geography" }, { "paragraph_id": 42, "text": "In addition to Erfurt-Nord, Erfurt-Südost is the second large prefabricated building area in the state capital. The problems associated with large housing estates are not as pronounced in the Southeast as in the North, but they are still present.", "title": "Geography" }, { "paragraph_id": 43, "text": "Around the year 1500, the city had 18,000 inhabitants and was one of the largest cities in the Holy Roman Empire. The population then more or less stagnated until the 19th century. 
The population of Erfurt was 21,000 in 1820, and increased to 32,000 in 1847, the year the city gained its rail connection and industrialization began. In the following decades Erfurt grew to 130,000 inhabitants at the beginning of World War I and 190,000 in 1950. A maximum was reached in 1988 with 220,000 persons. In 1991, after German reunification and when Erfurt became the capital of the state of Thuringia, it had a population of about 205,000. The poor economic situation in eastern Germany after reunification resulted in a decline in population, which fell to 200,000 in 2002 before rising again to 206,000 in 2011. The average population growth between 2009 and 2012 was approximately 0.68% p.a., whereas the population in bordering rural regions is shrinking at an accelerating rate. Suburbanization played only a small role in Erfurt. It occurred after reunification for a short time in the 1990s, but most of the suburban areas were situated within the administrative city borders. Erfurt is also the 10th largest city in Germany by area, with an area of 269.17 km² (103.93 sq mi).", "title": "Population" }, { "paragraph_id": 44, "text": "The birth deficit was 200 in 2012, or −1.0 per 1,000 inhabitants (Thuringian average: −4.5; national average: −2.4). The net migration rate was +8.3 per 1,000 inhabitants in 2012 (Thuringian average: −0.8; national average: +4.6). The most important regions of origin of Erfurt migrants are rural areas of Thuringia, Saxony-Anhalt and Saxony as well as foreign countries like Poland, Russia, Syria, Afghanistan and Hungary. Erfurt is today one of the more popular cities in the former East Germany due to its universities and broadcasting companies.", "title": "Population" }, { "paragraph_id": 45, "text": "As in other eastern German cities, foreigners account for only a small share of Erfurt's population: circa 3.0% are non-Germans by citizenship and overall 5.9% are migrants (according to the 2011 EU census).", "title": "Population" }, { "paragraph_id": 46, "text": "Due to the official atheism of the former GDR, most of the population is non-religious. 14.8% are members of the Evangelical Church in Central Germany and 6.8% are Catholics (according to the 2011 EU census). The Jewish community consists of 500 members. Most of them migrated to Erfurt from Russia and Ukraine in the 1990s.", "title": "Population" }, { "paragraph_id": 47, "text": "The theologian, philosopher and mystic Meister Eckhart (c. 1260–1328) entered the Dominican monastery (Predigerkloster) in Erfurt when he was aged about 18 (around 1275). Eckhart was the Dominican prior at Erfurt from 1294 until 1298, and Vicar of Thuringia from 1298 to 1302. After a year in Paris, he returned to Erfurt in 1303 and administered his duties as Provincial of Saxony from there until 1311.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 48, "text": "Martin Luther (1483–1546) studied law and philosophy at the University of Erfurt from 1501. He lived in St Augustine's Monastery in Erfurt as a friar from 1505 to 1511.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 49, "text": "Johann Pachelbel (1653–1706) served as organist at the Predigerkirche (Preachers Church) in Erfurt from June 1678 until August 1690.
Pachelbel composed approximately seventy pieces for organ while in Erfurt.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 50, "text": "The city is the birthplace of one of Johann Sebastian Bach's cousins, Johann Bernhard Bach, as well as Johann Sebastian Bach's father Johann Ambrosius Bach. Bach's parents were married in 1668 in the Kaufmannskirche (Merchant's Church) that still exists on the main square of Anger.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 51, "text": "Alexander Müller (1808–1863), pianist, conductor and composer, was born in Erfurt. He later moved to Zürich where he served as leader of the General Music Society's subscription concerts series.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 52, "text": "Max Weber (1864–1920) was born in Erfurt. He was a sociologist, philosopher, lawyer, and political economist whose ideas have profoundly influenced modern social theory and social research.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 53, "text": "After 1906 the composer Richard Wetz (1875–1935) lived in Erfurt and became the leading person in the city's musical life. His major works were written here, including three symphonies, a Requiem and a Christmas Oratorio.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 54, "text": "The textile designer Margaretha Reichardt (1907–1984) was born and died in Erfurt. She studied at the Bauhaus from 1926 to 1930, and while there worked with Marcel Breuer on his innovative chair designs. Her former home and weaving workshop in Erfurt, the Margaretha Reichardt Haus, is now a museum, managed by the Angermuseum Erfurt.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 55, "text": "Famous contemporary musicians from Erfurt are Clueso, the Boogie Pimps and Yvonne Catterfeld.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 56, "text": "Erfurt has a great variety of museums:", "title": "Culture, sights and cityscape" }, { "paragraph_id": 57, "text": "Since 2003, the modern opera house is home to Theater Erfurt and its Philharmonic Orchestra. The \"grand stage\" section has 800 seats and the \"studio stage\" can hold 200 spectators. In September 2005, the opera Waiting for the Barbarians by Philip Glass premiered in the opera house. The Erfurt Theatre has been a source of controversy. In 2005, a performance of Engelbert Humperdinck's opera Hänsel und Gretel stirred up the local press since the performance contained suggestions of pedophilia and incest. The opera was advertised in the programme with the addition \"for adults only\".", "title": "Culture, sights and cityscape" }, { "paragraph_id": 58, "text": "On 12 April 2008, a version of Verdi's opera Un ballo in maschera directed by Johann Kresnik opened at the Erfurt Theatre. The production stirred deep controversy by featuring nude performers in Mickey Mouse masks dancing on the ruins of the World Trade Center and a female singer with a painted on Hitler toothbrush moustache performing a straight arm Nazi salute, along with sinister portrayals of American soldiers, Uncle Sam, and Elvis Presley impersonators. The director described the production as a populist critique of modern American society, aimed at showing up the disparities between rich and poor. 
The controversy prompted one local politician to call for locals to boycott the performances, but this was largely ignored and the première was sold out.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 59, "text": "The Messe Erfurt serves as home court for the Oettinger Rockets, a professional basketball team in Germany's first division, the Basketball Bundesliga.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 60, "text": "Notable sports in Erfurt are athletics, ice skating, cycling (with the oldest velodrome in use in the world, opened in 1885), swimming, handball, volleyball, tennis and football. The city's football club FC Rot-Weiß Erfurt is a member of the 3. Fußball-Liga and is based at the Steigerwaldstadion, which has a capacity of 20,000. The Gunda-Niemann-Stirnemann Halle was the second indoor speed skating arena in Germany.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 61, "text": "Erfurt's cityscape features a medieval core of narrow, curved alleys in the centre surrounded by a belt of Gründerzeit architecture, created between 1873 and 1914. In 1873, the city's fortifications were demolished and it became possible to build houses in the area in front of the former city walls. In the following years, Erfurt saw a construction boom. In the northern area (districts Andreasvorstadt, Johannesvorstadt and Ilversgehofen) tenements for the factory workers were built, whilst the eastern area (Krämpfervorstadt and Daberstedt) featured apartments for white-collar workers and clerks, and the southwestern part (Löbervorstadt and Brühlervorstadt), with its beautiful valley landscape, saw the construction of villas and mansions of rich factory owners and notables.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 62, "text": "During the interwar period, some settlements in Bauhaus style were realized, often as housing cooperatives.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 63, "text": "After World War II and over the whole GDR period, housing shortages remained a problem even though the government started a big apartment construction programme. Between 1970 and 1990 large Plattenbau settlements with high-rise blocks on the northern (for 50,000 inhabitants) and southeastern (for 40,000 inhabitants) periphery were constructed. After reunification the renovation of old houses in the city centre and the Gründerzeit areas was a big issue. The federal government granted substantial subsidies, so that many houses could be restored.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 64, "text": "Compared to many other German cities, little of Erfurt was destroyed in World War II. This is one reason why the centre today offers a mixture of medieval, Baroque and Neoclassical architecture as well as buildings from the last 150 years.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 65, "text": "Public green spaces are located along the Gera river and in several parks like the Stadtpark, the Nordpark and the Südpark. The largest green area is the Egapark, a horticultural exhibition park and botanic garden established in 1961.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 66, "text": "The city centre has about 25 churches and monasteries, most of them in Gothic style, some also in Romanesque style or a mixture of Romanesque and Gothic elements, and a few in later styles.
The various steeples characterize the medieval centre and led to one of Erfurt's nicknames, the \"Thuringian Rome\".", "title": "Culture, sights and cityscape" }, { "paragraph_id": 67, "text": "The oldest parts of Erfurt's Alte Synagoge (Old Synagogue) date to the 11th century. It was used until 1349, when the Jewish community was destroyed in a pogrom known as the Erfurt Massacre. The building has had many other uses since then. It was conserved in the 1990s and in 2009 it became a museum of Jewish history. A rare Mikveh, a ritual bath dating from c. 1250, was discovered by archaeologists in 2007. It has been accessible to visitors on guided tours since September 2011. The Jewish heritage of Erfurt, including the Old Synagogue and the Mikveh, became a UNESCO World Heritage Site in September 2023, making it the second site of Jewish heritage in Germany to be listed by UNESCO.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 68, "text": "As religious freedom was granted in the 19th century, some Jews returned to Erfurt. They built their synagogue on the banks of the Gera river and used it from 1840 until 1884. The neoclassical building is known as the Kleine Synagoge (Small Synagogue). Today it is used as an events centre. It is also open to visitors.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 69, "text": "A larger synagogue, the Große Synagoge (Great Synagogue), was opened in 1884 because the community had become larger and wealthier. This Moorish-style building was destroyed during the nationwide Nazi riots known as Kristallnacht on 9–10 November 1938.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 70, "text": "In 1947 the land which the Great Synagogue had occupied was returned to the Jewish community, and they built their current place of worship, the Neue Synagoge (New Synagogue), which opened in 1952. It was the only synagogue building erected under communist rule in East Germany.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 71, "text": "Besides the religious buildings, there is a lot of historic secular architecture in Erfurt, mostly concentrated in the city centre, but some 19th- and 20th-century buildings are located on the outskirts.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 72, "text": "From 1066 until 1873 the old town of Erfurt was encircled by a fortified wall. About 1168 this was extended to run around the western side of Petersberg hill, enclosing it within the city boundaries.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 73, "text": "After German Unification in 1871, Erfurt became part of the newly created German Empire. The threat to the city from its Saxon neighbours and from Bavaria was no longer present, so it was decided to dismantle the city walls. Only a few remnants remain today. A piece of inner wall can be found in a small park at the corner of Juri-Gagarin-Ring and Johannesstraße, and another piece at the flood ditch (Flutgraben) near Franckestraße. There is also a small restored part of the wall in the Brühler Garten, behind the Catholic orphanage. Only one of the wall's fortified towers was left standing, on Boyneburgufer, but this was destroyed in an air raid in 1944.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 74, "text": "The Petersberg Citadel is one of the largest and best preserved city fortresses in Europe, covering an area of 36 hectares in the north-west of the city centre. It was built from 1665 on Petersberg hill and was in military use until 1963.
Since 1990, it has been significantly restored and is now open to the public as an historic site.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 75, "text": "The Cyriaksburg Citadel is a smaller citadel south-west of the city centre, dating from 1480. Today, it houses the German horticulture museum.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 76, "text": "Between 1873 and 1914, a belt of Gründerzeit architecture emerged around the city centre. The mansion district in the south-west around Cyriakstraße, Richard-Breslau-Straße and Hochheimer Straße hosts some interesting Gründerzeit and Art Nouveau buildings.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 77, "text": "The \"Mühlenviertel\" (\"mill quarter\") is an area of beautiful Art Nouveau apartment buildings, cobblestone streets and street trees just to the north of the old city, in the vicinity of Nord Park, bordered by the Gera river on its east side. The Schmale Gera stream runs through the area. In the Middle Ages numerous small enterprises using the power of water mills occupied the area, hence the name \"Mühlenviertel\", with street names such as Waidmühlenweg (woad, or indigo, mill way), Storchmühlenweg (stork mill way) and Papiermühlenweg (paper mill way).", "title": "Culture, sights and cityscape" }, { "paragraph_id": 78, "text": "The Bauhaus style is represented by some housing cooperative projects in the east around Flensburger Straße and Dortmunder Straße and in the north around Neuendorfstraße. The Lutherkirche (1927) in Magdeburger Allee is an Art Deco building.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 79, "text": "The former malt factory \"Wolff\" at Theo-Neubauer-Straße in the east of Erfurt is a large industrial complex built between 1880 and 1939, and in use until 2000. A new use has not yet been found, but the area is sometimes used as a location for film productions because of its atmosphere.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 80, "text": "Examples of Nazi architecture include the buildings of the Landtag (Thuringian parliament) and Thüringenhalle (an event hall) in the south at Arnstädter Straße. While the Landtag building (1930s) is more representative of the neo-Roman/fascist style, Thüringenhalle (1940s) is marked by some neo-Germanic Heimatschutz-style elements.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 81, "text": "The Stalinist early-GDR style is manifested in the main building of the university at Nordhäuser Straße (1953), while the later, more international modern GDR style is represented by the horticultural exhibition centre \"Egapark\" at Gothaer Straße, the Plattenbau housing complexes like Rieth or Johannesplatz and the redevelopment of the Löbertor and Krämpfertor area along Juri-Gagarin-Ring in the city centre.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 82, "text": "The current international glass and steel architecture is dominant among most larger new buildings like the Federal Labour Court of Germany (1999), the new opera house (2003), the new main station (2007), the university library, the Erfurt Messe (convention centre) and the Gunda Niemann-Stirnemann ice rink.", "title": "Culture, sights and cityscape" }, { "paragraph_id": 83, "text": "In recent years, the city's economic situation has improved: the unemployment rate declined from 21% in 2005 to 9% in 2013.
Nevertheless, some 14,000 households with 24,500 persons (12% of the population) are dependent upon state social benefits (Hartz IV).", "title": "Economy and infrastructure" }, { "paragraph_id": 84, "text": "Farming has a great tradition in Erfurt: the cultivation of woad made the city rich during the Middle Ages. Today, horticulture and the production of flower seeds are still an important business in Erfurt. Fruit (such as apples, strawberries and sweet cherries), vegetables (e.g. cauliflower, potatoes, cabbage and sugar beet) and grain are also grown on more than 60% of the municipal territory.", "title": "Economy and infrastructure" }, { "paragraph_id": 85, "text": "Industrialization in Erfurt started around 1850. Until World War I, many factories were founded in different sectors like engine building, shoes, guns, malt and later electro-technics, so that there was no industrial monoculture in the city. After 1945, the companies were nationalized by the GDR government, which led to the decline of some of them. After reunification, nearly all factories were closed, either because they failed to successfully adapt to a free market economy or because the German government sold them to west German businessmen who closed them to avoid competition with their own enterprises. However, in the early 1990s the federal government started to subsidize the foundation of new companies. It still took a long time before the economic situation stabilized around 2006. Since then, unemployment has decreased and, overall, new jobs have been created. Today, there are many small and medium-sized companies in Erfurt, with electro-technics, semiconductors and photovoltaics in focus. Engine production, food production, the Braugold brewery, and Born Feinkost, a producer of Thuringian mustard, remain important industries.", "title": "Economy and infrastructure" }, { "paragraph_id": 86, "text": "Erfurt is an Oberzentrum (which means \"supra-centre\" according to Central place theory) in German regional planning. Such centres are always hubs of service businesses and public services like hospitals, universities, research, trade fairs and retail. Additionally, Erfurt is the capital of the federal state of Thuringia, so there are many administrative institutions, such as all the Thuringian state ministries and some nationwide authorities. Typical of Erfurt are the logistics business, with many distribution centres of big companies, the Erfurt Trade Fair and the media sector, with KiKa and MDR as public broadcasters. A growing industry is tourism, due to the various historical sights of Erfurt. There are 4,800 hotel beds, and in 2012, 450,000 overnight visitors spent a total of 700,000 nights in hotels. Nevertheless, most tourists are one-day visitors from Germany. The Christmas Market in December attracts some 2,000,000 visitors each year.", "title": "Economy and infrastructure" }, { "paragraph_id": 87, "text": "The ICE railway network puts Erfurt 1½ hours from Berlin, 2½ hours from Frankfurt, 2 hours from Dresden, and 45 minutes from Leipzig.
In 2017, the ICE line to Munich opened, making the trip to Erfurt main station only 2½ hours.", "title": "Economy and infrastructure" }, { "paragraph_id": 88, "text": "There are regional trains from Erfurt to Weimar, Jena, Gotha, Eisenach, Bad Langensalza, Magdeburg, Nordhausen, Göttingen, Mühlhausen, Würzburg, Meiningen, Ilmenau, Arnstadt, and Gera.", "title": "Economy and infrastructure" }, { "paragraph_id": 89, "text": "In freight transport there is an intermodal terminal in the district of Vieselbach (Güterverkehrszentrum, GVZ) with connections to rail and the autobahn.", "title": "Economy and infrastructure" }, { "paragraph_id": 90, "text": "The two Autobahnen crossing each other nearby at Erfurter Kreuz are the Bundesautobahn 4 (Frankfurt–Dresden) and the Bundesautobahn 71 (Schweinfurt–Sangerhausen). Together with the east tangent, both motorways form a ring road around the city and route interregional traffic around the centre. Whereas the A 4 was built in the 1930s, the A 71 came into being after the reunification in the 1990s and 2000s. In addition to both motorways there are two Bundesstraßen: the Bundesstraße 7 runs parallel to the A 4, connecting Erfurt with Gotha in the west and Weimar in the east. The Bundesstraße 4 is a connection between Erfurt and Nordhausen in the north. Its southern part to Coburg was downgraded when the A 71 was finished (in this section, the A 71 now effectively serves as B 4). Within the ring road, B 7 and B 4 have also been downgraded, so that the city government has to pay for maintenance instead of the German federal government. Since 2012, access to the city has been restricted for some vehicles as part of an Umweltzone (low-emission zone). Large parts of the inner city are a pedestrian area which cannot be reached by car (except for residents).", "title": "Economy and infrastructure" }, { "paragraph_id": 91, "text": "The Erfurt public transport system is marked by the area-wide Erfurt Stadtbahn (light rail) network, established as a tram system in 1883, upgraded to a light rail (Stadtbahn) system in 1997, and continually expanded and upgraded through the 2000s. Today, there are six Stadtbahn lines running every ten minutes on every light rail route.", "title": "Economy and infrastructure" }, { "paragraph_id": 92, "text": "Additionally, Erfurt operates a bus system, which connects the sparsely populated outer districts of the region to the city centre. Both systems are organized by SWE EVAG, a transit company owned by the city administration. Trolleybuses were in service in Erfurt from 1948 until 1975, but are no longer in use.", "title": "Economy and infrastructure" }, { "paragraph_id": 93, "text": "Erfurt-Weimar Airport lies 3 km (2 mi) west of the city centre. It is linked to the central train station via Stadtbahn (tram). It was significantly extended in the 1990s, with flights mostly to Mediterranean holiday destinations and to London during the peak Christmas market tourist season. Connections to longer-haul flights are easily accessible via Frankfurt Airport, which can be reached in 2 hours via a direct train from Frankfurt Airport to Erfurt, and from Leipzig/Halle Airport, which can be reached within half an hour.", "title": "Economy and infrastructure" }, { "paragraph_id": 94, "text": "Biking has become increasingly popular since construction of high-quality cycle tracks began in the 1990s.
There are cycle lanes for general commuting within Erfurt city.", "title": "Economy and infrastructure" }, { "paragraph_id": 95, "text": "Long-distance trails, such as the Gera track and the Radweg Thüringer Städtekette (Thuringian cities trail), connect points of tourist interest. The former runs along the Gera river valley from the Thuringian Forest to the river Unstrut; the latter follows the medieval Via Regia from Eisenach to Altenburg via Gotha, Erfurt, Weimar, and Jena.", "title": "Economy and infrastructure" }, { "paragraph_id": 96, "text": "The Rennsteig Cycle Way was opened in 2000. This designated high-grade hiking and bike trail runs along the ridge of the Thuringian Central Uplands. The bike trail, about 200 km (124 mi) long, occasionally departs from the course of the historic Rennsteig hiking trail, which dates back to the 1300s, to avoid steep inclines. It is therefore about 30 km (19 mi) longer than the hiking trail.", "title": "Economy and infrastructure" }, { "paragraph_id": 97, "text": "The Rennsteig is connected to the E3 European long distance path, which goes from the Atlantic coast of Spain to the Black Sea coast of Bulgaria, and the E6 European long distance path, running from Arctic Finland to Turkey.", "title": "Economy and infrastructure" }, { "paragraph_id": 98, "text": "After reunification, the educational system was reorganized. The University of Erfurt, founded in 1379 and closed in 1816, was refounded in 1994 with a focus on social sciences, modern languages, humanities and teacher training. Today there are approximately 6,000 students working within four faculties, the Max Weber Center for Advanced Cultural and Social Studies, and three academic research institutes. The university has an international reputation and participates in international student exchange programmes.", "title": "Economy and infrastructure" }, { "paragraph_id": 99, "text": "The Fachhochschule Erfurt, is a university of applied sciences, founded in 1991, which offers a combination of academic training and practical experience in subjects such as social work and social pedagogy, business studies, and engineering. There are nearly 5,000 students in six faculties, of which the faculty of landscaping and horticulture has a national reputation.", "title": "Economy and infrastructure" }, { "paragraph_id": 100, "text": "The International University of Applied Sciences Bad Honnef – Bonn (IUBH), is a privately run university with a focus on business and economics. It merged with the former Adam-Ries-Fachhochschule in 2013.", "title": "Economy and infrastructure" }, { "paragraph_id": 101, "text": "The world renowned Bauhaus design school was founded in 1919 in the city of Weimar, approximately 20 km (12 mi) from Erfurt, 12 minutes by train. The buildings are now part of a World Heritage Site and are today used by the Bauhaus-Universität Weimar, which teaches design, arts, media and technology related subjects.", "title": "Economy and infrastructure" }, { "paragraph_id": 102, "text": "Furthermore, there are eight Gymnasien, six state-owned, one Catholic and one Protestant (Evangelisches Ratsgymnasium Erfurt). One of the state-owned schools is a Sportgymnasium, an elite boarding school for young talents in athletics, swimming, ice skating or football. 
Another state-owned school, Albert Schweitzer Gymnasium, offers a focus in sciences as an elite boarding school in addition to the common curriculum.", "title": "Economy and infrastructure" }, { "paragraph_id": 103, "text": "The German national public television children's channel KiKa is based in Erfurt.", "title": "Economy and infrastructure" }, { "paragraph_id": 104, "text": "MDR, Mitteldeutscher Rundfunk, a radio and television company, has a broadcast centre and studios in Erfurt.", "title": "Economy and infrastructure" }, { "paragraph_id": 105, "text": "The Thüringer Allgemeine is a statewide newspaper that is headquartered in the city.", "title": "Economy and infrastructure" }, { "paragraph_id": 106, "text": "The first freely elected mayor after German reunification was Manfred Ruge of the Christian Democratic Union, who served from 1990 to 2006. Since 2006, Andreas Bausewein of the Social Democratic Party (SPD) has been mayor. The most recent mayoral election was held on 15 April 2018, with a runoff held on 29 April, and the results were as follows:", "title": "Politics" }, { "paragraph_id": 107, "text": "The most recent city council election was held on 26 May 2019, and the results were as follows:", "title": "Politics" }, { "paragraph_id": 108, "text": "", "title": "Politics" }, { "paragraph_id": 109, "text": "Erfurt is twinned with:", "title": "Twin towns – sister cities" } ]
Erfurt is the capital and largest city of the Central German state of Thuringia. It is in the wide valley of the River Gera, in the southern part of the Thuringian Basin, north of the Thuringian Forest, and in the middle of a line of the six largest Thuringian cities, stretching from Eisenach in the west, via Gotha, Erfurt, Weimar and Jena, to Gera in the east, close to the geographic centre of Germany. Erfurt is 100 km (62 mi) south-west of Leipzig, 250 km (155 mi) north-east of Frankfurt, 300 km (186 mi) south-west of Berlin and 400 km (249 mi) north of Munich. Erfurt's old town is one of the best preserved medieval city centres in Germany. Tourist attractions include the Merchants' Bridge (Krämerbrücke), the Old Synagogue, the oldest in Europe and a UNESCO World Heritage Site, Cathedral Hill (Domberg) with the ensemble of Erfurt Cathedral and St Severus' Church (Severikirche) and Petersberg Citadel, one of the largest and best preserved town fortresses in Central Europe. The city's economy is based on agriculture, horticulture and microelectronics. Its central location has made it a logistics hub for Germany and central Europe. Erfurt hosts the second-largest trade fair in eastern Germany, as well as the public television children's channel KiKa. The city is on the Via Regia, a medieval trade and pilgrims' road network. Modern Erfurt is also a hub for ICE high speed trains and other German and European transport networks. Erfurt was first mentioned in 742, as Saint Boniface founded the diocese. Although the town did not belong to any of the Thuringian states politically, it quickly became the economic centre of the region and was a member of the Hanseatic League. It was part of the Electorate of Mainz during the Holy Roman Empire, and became part of the Kingdom of Prussia in 1802. From 1949 until 1990 Erfurt was part of the German Democratic Republic. The University of Erfurt was founded in 1379, making it the first university to be established within the geographic area which constitutes modern Germany. It closed in 1816 and was re-established in 1994. Martin Luther (1483–1546) was its most famous student, studying there from 1501 before entering St Augustine's Monastery in 1505. Other noted Erfurters include the medieval philosopher and mystic Meister Eckhart, the Baroque composer Johann Pachelbel (1653–1706) and the sociologist Max Weber (1864–1920).
2001-05-17T18:43:01Z
2023-12-27T13:45:30Z
[ "Template:Clarify", "Template:Election table", "Template:Increase", "Template:Reflist", "Template:Use dmy dates", "Template:Convert", "Template:Historical populations", "Template:Short description", "Template:Hanseatic League", "Template:Citation", "Template:Webarchive", "Template:Cities in Germany", "Template:Cities in Thuringia", "Template:Authority control", "Template:IPA-de", "Template:Lang-fr", "Template:Frac", "Template:Div col end", "Template:Capitals of the states of the Federal Republic of Germany", "Template:List of European capitals by region", "Template:Weather box", "Template:Div col", "Template:Lang-de", "Template:Update inline", "Template:Cite web", "Template:Infobox German location", "Template:Lang", "Template:Interlanguage link multi", "Template:Ws", "Template:More citations needed", "Template:Cite journal", "Template:Flagicon", "Template:Cite book", "Template:Geographic location", "Template:See also", "Template:Citation needed", "Template:Refn", "Template:Decrease", "Template:Cite news", "Template:Commons", "Template:Wikivoyage", "Template:Germany districts Thuringia", "Template:Use British English", "Template:Main", "Template:Unreferenced section", "Template:Bezirke DDR Seats" ]
https://en.wikipedia.org/wiki/Erfurt
9,482
Enya
Eithne Pádraigín Ní Bhraonáin (born 17 May 1961), known mononymously as Enya, is an Irish singer and composer. Noted for her modern Celtic music, she is the best-selling Irish solo artist and the second-best-selling Irish music act overall, after rock band U2. Enya was raised in the Irish-speaking region of Gweedore. In 1980, Enya (as Eithne Ní Bhraonáin) began her musical career playing alongside her family's Celtic folk band Clannad. She left Clannad in 1982 to pursue a solo career, working with the former Clannad manager and producer, Nicky Ryan, and his partner Roma, as their lyricist. Over the following four years, Enya developed her sound by combining multitracked vocals and keyboards with elements from a variety of musical genres such as Celtic, classical, church, jazz, new age, world, pop, and Irish folk. The two earliest releases by Enya were instrumentals for the Touch Travel (1984) cassette compilation. She composed most of the soundtrack and sang two songs intended for The Frog Prince (1985), and a body of work for the BBC documentary series The Celts (1986). A selection of her pieces for The Celts were released as her debut album, Enya (1987). She later signed with Warner Music UK, which granted her considerable artistic freedom and minimal interference. The success of Watermark (1988) propelled Enya to worldwide fame, helped mostly by the international hit single "Orinoco Flow (Sail Away)". This was followed by the multi-million-selling albums Shepherd Moons (1991), The Memory of Trees (1995), and A Day Without Rain (2000). Sales of A Day Without Rain and its lead single, "Only Time", surged in the United States following its use in media coverage of the 9/11 attacks. After Amarantine (2005) and And Winter Came... (2008), Enya took a four-year break from music, returning in 2012 to begin work on her eighth studio album Dark Sky Island (2015). According to her sister Moya, Enya was recording music as of 2019. Eithne Pádraigín Ní Bhraonáin was born in the Dore area of Gweedore, County Donegal, on 17 May 1961, the sixth of nine children in the Brennan family of musicians, born to Máire "Baba" and Leopold "Leo" Brennan. In 1968, the couple took ownership of a pub in Meenaleck, Co. Donegal, naming it Leo's Tavern. Leo Brennan (1925-2016) was the leader of an Irish showband named the Slieve Foy Band, before performing solo. Baba Brennan (née Duggan; born 1930) has remote Spanish roots with ancestors who settled on Tory Island and she was an amateur musician who played with the Slieve Foy Band. Enya's mother also taught music at Gweedore Community School. Enya grew up in Gweedore, a region where Irish is the primary language. Her name is anglicised as Enya Patricia Brennan, with "Enya" being the phonetic spelling of how "Eithne" is pronounced in her native Ulster dialect. "Ní Bhraonáin" translates to "daughter of Brennan". Enya's maternal grandfather Aodh, was the headmaster of the primary school in Dore where her grandmother was a teacher. Aodh was also the founder of the Gweedore Theatre company. Enya has described her upbringing as "very quiet and happy." At three-and-a-half years of age, she took part in her first singing competition at the annual Feis Ceoil music festival. Enya also participated in pantomimes at Gweedore Theatre and sang with her siblings in their mother's choir at St Mary's church in Derrybeg. At the age of four, she began piano lessons and was learning English throughout primary school. 
She later said, "I had to do school work and then travel to a neighbouring town for piano lessons, and then more school work. I remember my brothers and sisters playing outside and I would be inside playing the piano, this one big book of scales, practising them over and over." As well as traditional Irish music, Enya and her siblings were introduced to a variety of music in the 60s and 70s, and enjoyed watching musical films. In a radio interview with Elaine Page in November 2008, Enya shared a selection of favourite songs from musicals. She said of Jesus Christ Superstar, "it was such an original piece of music in 1970 [...] played in my house every single day, and myself and my sisters would sing word for word". From the age of 11, Enya attended a convent boarding school, Milford College, in Milford run by the Sisters of Loreto; her education there was paid for by her grandfather. The boarding school, now Loreto Community School, was where Enya developed a taste for classical music, art, Latin, and watercolour painting. She said, "It was devastating to be torn away from such a large family but it was good for my music." Enya finished boarding school at age 17 in the late 1970s, and studied classical music at college the following year, with the original intention to be teaching the piano, rather than composing and performing her own music. In 1970, several members of Enya's family formed Clannad, a Celtic folk band. Clannad hired Nicky Ryan as their manager, sound engineer, and producer, and Ryan's future wife, Roma Shane, as tour manager and administrator. In 1980, after a year at college, Enya decided not to pursue a music degree and instead accepted Ryan's invitation to join Clannad, having wanted to expand their sound with keyboards and an additional vocalist. Enya performed an uncredited role on their sixth studio album, Crann Úll (1980), with a line-up of elder siblings Máire, Pól, and Ciarán Brennan, and twin uncles Noel and Pádraig Duggan. She features in their follow-up, Fuaim (1981), singing the song An Túll. From Ciarán's perspective, Enya was a "hired hand" and not a full member, commenting that "she was 18, 19 and we were paying her £500 sterling a week." Nicky said it was not his intention to make Enya a permanent member, as she was "fiercely independent [...] intent on playing her own music. She was just not sure of how to go about it." Nicky discussed the idea of layering vocals to create a "choir of one" with Enya, a concept inspired by Phil Spector's Wall of Sound technique that had interested them both. During a Clannad tour in 1982, Nicky called for a band meeting to address internal issues that had arisen, primarily around the drinking of one or two members. He recalled: "It was short and only required a vote, I was a minority of one and lost. Roma and I were out. This left the question of what happened with Enya. I decided to stand back and say nothing." Enya chose to leave with the Ryans and pursue a solo career, having felt confined in the group and disliking being "somebody in the background". The split caused some friction between the parties, but in time, they settled their differences. Enya's brother Ciarán also spoke to Nicky Ryan around 2006, interested in recording in their studio with her, but Ryan suggested that this was unlikely to occur. Nicky suggested to Enya that either she return to Gweedore "with no particular definite future", or live with him and Roma in suburban Artane, in Dublin, "and see what happens, musically", which Enya decided to try. 
After their bank denied them a loan, Enya sold her saxophone and gave piano lessons as a source of income. Nicky Ryan used what they could afford to build a recording facility in the Ryans' garden shed, which they named "Aigle Studio", after the French word for eagle. Enya lived with the Ryans from 1982, shortly after leaving Clannad, until 1989, when she bought a penthouse apartment in Killiney. Enya and the Ryans rented Aigle Studio out to other musicians to help recoup the costs. The trio formed a musical and business partnership, with Nicky as Enya's producer and arranger and Roma as her lyricist. They called their company, of which each owns a third, Aigle Music. In the following two years, Enya developed her technique and composition by listening to recordings of herself performing pieces of classical music, repeating this process until she started to improvise sections and develop her own arrangements. In the early 1980s, following her departure from Clannad, Enya recorded with a few artists, often on keyboards or backing vocals, with Nicky Ryan as producer. She also played the synthesiser on the group Ragairne's Ceol Aduaidh, led by Mairéad Ní Mhaonaigh and Frankie Kennedy. Enya was one of the earlier choices, before Maggie Reilly, to sing on Mike Oldfield's single "Moonlight Shadow", but she declined the offer, likely due to existing contracts. "Bailieboro and Me" is a Charlie McGettigan song recorded with the group Jargon; an early recording features Enya singing backing vocals, though she is primarily credited, as Eithne Ní Bhraonáin, for playing the grand piano on the song. Enya's first solo endeavour was in 1982, when she composed and later released two piano instrumentals, "An Ghaoth Ón Ghrian" (Irish for "The Solar Wind") and "Miss Clare Remembers". Both were recorded at Windmill Lane Studios in Dublin and released on Touch Travel (1984), a limited-release cassette of music from various artists on the Touch label. She is credited as Eithne Ní Bhraonáin in the liner notes. After several months of preparation, Enya's first live solo performance took place at the National Stadium in Dublin on 23 September 1983, and was televised for RTÉ's music show Festival Folk. Niall Morris, a musician who worked with her during this time, recalled she "was so nervous she could barely get on stage, and she cowered behind the piano until the gig was over". Morris assisted Enya in the production of a demo tape, adding additional keyboards to her compositions. Roma thought the music would suit accompanying visuals and sent it to various film producers. Among them was David Puttnam, after Roma had read an interview where he stated a particular interest in strong melodies. Puttnam liked the tape and offered Enya the chance to compose the soundtrack to his upcoming romantic comedy film, The Frog Prince (1985). Enya scored nine pieces for the film; later, against her wishes, the pieces were rearranged and orchestrated by Richard Myhill, except for two pieces in which she sang: "The Frog Prince" and "Dreams". The words to "Dreams" were penned by Charlie McGettigan. The film editor Jim Clark said the rearrangements were necessary as Enya found it difficult to compose to the picture. Released in 1985, the album is the first commercial release that credits her as "Enya". Nicky Ryan suggested the phonetic spelling of her name, thinking that Eithne would be mispronounced by non-Irish speakers. Enya looked back at her composition work on the film as a good career move, but a disappointing one as "we weren't part of it at the end".
Also in 1985, she sang on three tracks on Ordinary Man (1985) by Christy Moore. In 1985, producer Tony McAuley asked Enya to contribute a track for the six-part BBC television documentary series The Celts. She had already written a Celtic-influenced song called "The March of the Celts", and submitted it to the project. Each episode was to feature a different composer at first, but director David Richardson liked her track so much that he had Enya score the entire series. Enya recorded 72 minutes of music at Aigle Studio and the BBC studios in Wood Lane, London, without recording to the picture. She was required to portray certain themes and ideas that the producers wanted; but unlike The Frog Prince, she worked with little interference which granted her freedom to establish the sound that she would adopt throughout her future career, signified by layered vocals, keyboard-oriented music, and percussion with elements of Celtic, classical, church, and folk music. In March 1987, two months before The Celts aired, a 40-minute selection of Enya's score was released as her debut solo album, Enya, by BBC Records in the United Kingdom and by Atlantic Records in the United States. The latter promoted it with a new-age imprint on the packaging, which Nicky later thought was "a cowardly thing for them to do". The album gained enough public attention to reach number 8 on the Irish Albums Chart and number 69 on the UK Albums Chart. "I Want Tomorrow" was released as Enya's first single. "Boadicea" was later sampled by The Fugees on their 1996 song "Ready or Not"; the group neither sought permission nor gave credit. Enya took legal action and the group subsequently gave her credit; they paid a fee of approximately $3 million. Later in 1987, Enya appeared on Sinéad O'Connor's debut album The Lion and the Cobra, reciting Psalm 91 in Irish on "Never Get Old". Several weeks after the release of Enya, Enya secured a recording contract with Warner Music UK after Rob Dickins, the label's chairman and a fan of Clannad, took a liking to Enya and found himself playing it "every night before I went to bed". He later met Enya and the Ryans at a chance meeting at the Irish Recorded Music Association award ceremony in Dublin, where he learned that Enya had entered negotiations with a rival label. Dickins seized the opportunity and signed her, in doing so granting her wish to write and record with artistic freedom, minimal interference from the label, and without set deadlines to finish albums. Dickins said: "Sometimes you sign an act to make money, and sometimes you sign an act to make music. This was the latter... I just wanted to be involved with this music." Enya left Atlantic and signed with the Warner-led Geffen Records to handle her American distribution. When asked about whether women in pop have a hard time, she responded "yes, they do. Definitely." However, Enya has considered her position as a composer rather than just a vocalist to be an advantage "because I write and perform much of the music, I'm taken more seriously than the girls who just walk into a studio, do a vocal and that's it. I can't even imagine what that would be like." With the green light to produce a new album, Enya recorded Watermark from June 1987 to April 1988. It was initially recorded in analogue at Aigle before Dickins requested to have it re-recorded digitally at Orinoco Studios in Bermondsey, London. 
Watermark was released in September 1988 and became an unexpected hit, reaching number 5 in the United Kingdom and number 25 on the Billboard 200 in the United States following its release there in January 1989. Its lead single, "Orinoco Flow", was the last song written for the album. It was not intended to be a single at first, but Enya and the Ryans chose it after Dickins jokingly asked for a single; he knew that Enya's music was not made for the Top 40 chart. Dickins and engineer Ross Cullum are referenced in the song's lyrics. "Orinoco Flow" became an international top 10 hit and was number one in the United Kingdom for three weeks. The new-found success propelled Enya to international fame and she received endorsement deals and offers to use her music in television commercials. She spent a year traveling worldwide to promote the album which increased her exposure through interviews, appearances, and live performances. After promoting Watermark, Enya purchased new recording equipment and started work on her next album, Shepherd Moons. She found that the success of Watermark caused a considerable amount of pressure when it came to writing new songs, stating, "I kept thinking, 'Would this have gone on Watermark? Is it as good?' Eventually I had to forget about this and start on a blank canvas and just really go with what felt right". Enya wrote songs based on several ideas, including entries from her diary, the Blitz in London, and her grandparents. Shepherd Moons was released in November 1991, her first album released under Warner-led Reprise Records in the United States. It became a greater commercial success than Watermark, reaching number one in the UK for one week and number 17 in the United States. "Caribbean Blue", its lead single, charted at number 13 in the United Kingdom. In 1991, Warner Music released a collection of five Enya music videos as Moonshadows for home video. In 1993 Enya won her first Grammy Award in the Best New Age Album category for Shepherd Moons. Soon after, Enya and Nicky entered discussions with Industrial Light & Magic, founded by George Lucas, regarding an elaborate stage lighting system for a proposed concert tour, but nothing resulted from those discussions. In November 1992, Warner obtained the rights to Enya and re-released the album as The Celts with new artwork. It surpassed its initial sale performance, reaching number 10 in the UK. After travelling worldwide to promote Shepherd Moons, Enya started to write and record her fourth album, The Memory of Trees. By this time, the Ryans had moved to the southern Dublin suburb of Killiney, and a new Aigle Studio had been built alongside their home, with new recording facilities which eliminated the need to go to London to finish recording for the album. The new album was released in November 1995 and peaked at number 5 in the UK and number 9 in the US, where it sold over 3 million copies. Its lead single, "Anywhere Is", reached number 7 in the UK. The second, "On My Way Home", reached number 26 in the UK. In late 1994, Enya put out an extended play of Christmas music titled The Christmas EP. Enya was offered the opportunity to compose the film score for Titanic but declined as it would be a collaboration, rather than solely her composition. A recording of her singing "Oíche Chiúin", an Irish-language version of "Silent Night", appeared on the charity album A Very Special Christmas 3, released in benefit of the Special Olympics in October 1997. 
In early 1997, Enya began to select tracks for her first compilation album, "trying to select the obvious ones, the hits, and others." She chose to work on the collection following the promotional tour for The Memory of Trees as she felt it was the right time in her career, and that her contract with WEA required her to release a "best of" album. The set, named Paint the Sky with Stars: The Best of Enya, features two new tracks, "Paint the Sky with Stars" and "Only If...". Released in November 1997, the album was a worldwide commercial success, reaching number 4 in the UK and number 30 in the US, where it went on to sell over 4 million copies. "Only If..." was released as a single in 1997. Enya described the album as "like a musical diary... each melody has a little story and I live through that whole story from the beginning... your mind goes back to that day and what you were thinking." Enya started work on her fifth studio album, titled A Day Without Rain, in mid-1998. In a departure from her previous albums, she incorporated the use of a string section into her compositions, something that was not a conscious decision at first, but Enya and Nicky Ryan agreed that it complemented the songs that were being written. The album was released in November 2000 and reached number 6 in the UK and an initial peak of number 17 in the US. In the aftermath of the 11 September attacks, US sales of the album and its lead single "Only Time" surged after the song was widely used during radio and television coverage of the events, leading to its description as "a post-September 11th anthem". The exposure caused A Day Without Rain to outperform its original chart performance to peak at number 2 on the Billboard 200, and the release of a maxi-single containing the original and a pop remix of "Only Time" in November 2001. Enya donated its proceeds in aid of the International Association of Firefighters. The song topped the Billboard Hot Adult Contemporary Tracks chart and went to number 10 on the Hot 100 singles, Enya's highest charting US single to date. In 2001, Enya agreed to write and perform on two tracks for the soundtrack of The Lord of the Rings: The Fellowship of the Ring (2001) at the request of director Peter Jackson. Its composer Howard Shore "imagined her voice" as he wrote the film's score, making an uncommon exception to include another artist in one of his soundtracks. After flying to New Zealand to observe the filming and to watch a rough cut of the film, Enya returned to Ireland and composed "Aníron" (the theme for Aragorn and Arwen), with lyrics by Roma in J. R. R. Tolkien's fictional Elvish language Sindarin, and "May It Be", sung in English and another Tolkien language, Quenya. Shore then based his orchestrations around Enya's recorded vocals and themes to create "a seamless sound". In 2002, Enya released "May It Be" as a single which earned her an Academy Award nomination for Best Original Song. She performed the song live with an orchestra at the 74th Academy Awards ceremony in March 2002, and later cited the moment as a career highlight. Enya undertook additional studio projects in 2001 and 2002. The first was work on the soundtrack of the Japanese romantic film Calmi Cuori Appassionati (2001), which was subsequently released as Themes from Calmi Cuori Appassionati (2001). This release is formed of tracks spanning her career from Enya to A Day Without Rain with two B-sides. The album went to number 2 in Japan and became Enya's second album to sell one million copies in the country. 
In 2004, Enya had another significant "Boadicea" sampling request from Diddy, for the song "I Don't Wanna Know" performed by Mario Winans. She said that the producer "phoned the studio we were working in and Nicky took the call and he [Diddy] just said he had this fantastic singer that he was working with and it was Mario Winans. Immediately we said “send the song” and it was a great song." In September 2003, Enya returned to Aigle Studio to start work on her sixth studio album, Amarantine. Roma said the title means "everlasting". The album marks the first instance of Enya singing in Loxian, a fictional language created by Roma that came about when Enya was working on "Water Shows the Hidden Heart". After numerous attempts to sing the song in English, Irish, and Latin, Roma suggested a new language based on some of the sounds Enya would sing along to when developing her songs. It was a success, and Enya sang "Less Than a Pearl" and "The River Sings" in the same way. Roma worked on the language further, creating a "culture and history" behind it surrounding the Loxian people who are on another planet, questioning the existence of life outside of Earth. "Sumiregusa (Wild Violet)" is sung in Japanese. Amarantine was a global success, reaching number 6 on the Billboard 200 and number 8 in the UK. It has sold over 1 million certified copies in the US, a considerable drop in sales in comparison to her previous albums. Enya dedicated the album to BBC producer Tony McAuley who had commissioned Enya to write the soundtrack to The Celts, following his death in 2003. The lead single, "Amarantine", was released in December 2005. Enya wrote music with a winter and Christmas theme for her seventh studio album, And Winter Came... Initially, she intended to make an album of seasonal songs and hymns set for a release in late 2007 but decided to produce a winter-themed album instead. The track "My! My! Time Flies!", a tribute to the late Irish guitarist Jimmy Faulkner, incorporates a guitar solo performed by Pat Farrell, the first guitar solo on an Enya album since "I Want Tomorrow" from Enya. The lyrics also include atypical pop-culture references, such as The Beatles' famous photo shoot for the cover of Abbey Road. Upon its release in November 2008, And Winter Came... reached number 6 in the UK and number 8 in the US and sold almost 3.5 million copies worldwide by 2011. After promoting And Winter Came..., Enya took an extended break from writing and recording music. She spent her time resting, visiting family in Australia, and renovating her new home in the south of France. In March 2009, her first four studio albums were reissued in Japan in the Super High Material CD format with bonus tracks. Her second compilation album, The Very Best of Enya, was released in November 2009 and featured songs from 1987 to 2008, including a previously unreleased version of "Aníron" and a DVD compiling most of her music videos to date. In 2012, Enya returned to the studio to record her eighth album, Dark Sky Island. Its name refers to the island of Sark, which became the first island to be designated a dark-sky preserve, and a series of poems on islands by Roma Ryan. In 2013, "Only Time" was used in the "Epic Split" advertisement by Volvo Trucks starring Jean-Claude Van Damme who does the splits while suspended between two lorries. 
Upon the album's release on 20 November 2015, Dark Sky Island went to number 4 in the UK, Enya's highest-charting studio album there since Shepherd Moons went to number 1, and to number 8 in the US. A Deluxe Edition features three additional songs. Enya completed a promotional tour of the UK, Europe, the US, and Japan. During her visit to Japan, Enya performed "Orinoco Flow" and "Echoes in Rain" at the Universal Studios Japan Christmas show in Osaka. In December 2016, Enya appeared on the Irish television show Christmas Carols from Cork, marking her first Irish television appearance in over seven years. She sang "Adeste Fideles", "Oiche Chiúin", and "The Spirit of Christmas Past". Since late 2019 there has been a significant increase in activity on Enya's official social platforms online. There have been more official Enya posts on Facebook, Instagram and Twitter, updates to Enya tracks and playlists on Spotify, Apple Music and Amazon Music, as well as YouTube channel updates and new content. Several music videos on Enya's official YouTube channel have undergone 4K HD conversion since 2020. Numerous YouTube "watch party" videos and vinyl re-releases marking anniversaries of Enya's albums and compilations have been released since. The first of these videos was posted on Enya's official YouTube channel in November 2020 to commemorate the 20th anniversary of A Day Without Rain. In addition to the individual tracks from the album, it included handwritten introductory messages from Enya and Roma Ryan, plus a closing message from Nicky Ryan. Some behind-the-scenes clips from the making of the music videos for "Only Time" and "Wild Child", both directed by Graham Fink, were also included. For the Shepherd Moons 30th Anniversary Watch Party video in November 2021, Nicky Ryan's introductory message noted that during the COVID-19 pandemic, Aigle Studio underwent some renovations, with new recording equipment and instruments installed, and that with this done, Enya and the Ryans were eager to start working on new music. A 20th anniversary vinyl picture disc re-release of the "May It Be" single was also released in late 2021. Enya's music, particularly her 1986 humming song "Boadicea", continues to be sampled or interpolated by many modern-day producers in songs within the R&B and hip-hop genres. In 2022, for Metro Boomin and The Weeknd's song "Creepin'", Enya did not approve of the song being released under the working title "IDWK" (referring to the song "I Don't Wanna Know"), so Metro reportedly asked her to select song titles that she would be happy with, which included "Undecided", "Creepin'", "Don't Come Back to Me", "Better Off That Way" and "Wanna Let You Know". Metro said that "Creepin'" "was the one [...] It ended up being a blessing because it's the best name for it." In June 2023, Enya's 1997 limited compilation A Box of Dreams was re-issued on six vinyl LPs, featuring new liner notes. Nicky Ryan confirmed that they were working on a new album, and the possibility of a book based on the trio's thoughts regarding the Oceans tracks was also mentioned. Enya's note, in Irish, read "Beidh muid ag teacht le chéile gan mhoile", which roughly translates to "We will meet again soon". On 19 September 2023, a watch party video for the 35th anniversary of Watermark was presented. Alongside this, vinyl LPs of Watermark and a Dolby Atmos upmix of "Orinoco Flow" were also released. Enya's vocal range has been described as mezzo-soprano.
She has cited her musical foundations as "the classics", church music, and "Irish reels and jigs", with a particular interest in Sergei Rachmaninoff, a favourite composer of hers. She has an autographed picture of him in her home. Since 1982, she has recorded her music with Nicky Ryan as producer and arranger and his wife Roma Ryan as a lyricist. While in Clannad, Enya chose to work with Nicky as the two shared an interest in vocal harmonies, and Ryan, influenced by The Beach Boys and the "Wall of Sound" technique that Phil Spector pioneered, wanted to explore the idea of "the multi-vocals" for which her music became known. According to Enya, "Angeles" from Shepherd Moons has roughly 500 vocals recorded individually and layered. Enya performs all vocals and the majority of instruments in her songs; guest musicians play percussion, guitar, violin, uilleann pipes, cornet, and double bass. Her early works, including Watermark, feature piano and numerous keyboard synthesisers including the Yamaha KX88 master keyboard, Yamaha DX7, Oberheim Matrix, Kurzweil K250, Fairlight III, E-mu Emulator II, Akai S900, PPG Wave Computer 360, Roland D-50 (famously used with the Pizzagogo patch in "Orinoco Flow"), and the Roland Juno-60, the latter a particular favourite of hers.
Numerous critics and reviewers classify Enya's albums as new-age music, and she has won four Grammy Awards in the category. However, Enya does not classify her music as part of the genre. When asked in what genre she would classify her music, she replied "Enya". Nicky Ryan commented on the new age designation: "Initially it was fine, but it's really not new age. Enya plays a whole lot of instruments, not just keyboards. Her melodies are strong and she sings a lot. So I can't see a comparison." Enya also said in 1988 of New Age music, "it's air, thin air", and called it spineless, unlike her own music. In a later interview, Enya said that she "felt that title was given to any musician whom critics didn't know how to pigeonhole."
Older artwork often inspires some of the visuals that accompany Enya's music. The 1991 music video for "Caribbean Blue" and the 1995 album cover artwork for The Memory of Trees both feature adapted works by the artist Maxfield Parrish. In the 1996 music video for "On My Way Home", scenes of girls lighting paper lanterns to hang in flowery foliage were inspired by John Singer Sargent's painting Carnation, Lily, Lily, Rose.
In addition to her native Irish, Enya has recorded songs in languages including English, French, Latin, Spanish, and Welsh. She has recorded music influenced by works from fantasy author J. R. R. Tolkien, including the instrumental "Lothlórien" from Shepherd Moons. For The Lord of the Rings: The Fellowship of the Ring, she sang "May It Be" in English and Tolkien's fictional language Quenya, and she sang "Aníron" in another of Tolkien's fictional languages, Sindarin. Amarantine and Dark Sky Island include songs sung in Loxian, a fictional language created by Roma Ryan that has no official syntax. Its vocabulary was formed by Enya singing the song's notes, to which Roma wrote their phonetic spelling.
Enya adopted a composing and songwriting method that has deviated little throughout her career. At the start of the recording process for an album, she enters the studio, forgetting about her previous success, fame, and songs of hers that became hits. "If I did that", she said, "I'd have to call it a day".
She then develops ideas on the piano, keeping note of any arrangement that can be worked on further. During her time writing, Enya works a five-day week, takes weekends off, and does not work on her music at home. With Irish as her first language, Enya initially records her songs in Irish as she can express "feeling much more directly" in Irish than in English. After some time, Enya presents her ideas to Nicky to discuss what pieces work best, while Roma works in parallel to devise lyrics for the songs. Enya considered "Fallen Embers" from A Day Without Rain an instance where the lyrics perfectly reflect how she felt while writing the song. In 2008, she said she had newly discovered her tendency to write "two or three songs" during the winter months, work on the arrangements and lyrics the following spring and summer, and then work on the next couple of songs when autumn arrives.
Enya says that she and Warner Music "did not see eye to eye" initially as the label imagined her performing on stage "with a piano... maybe two or three synthesizer players and that's it". Enya also explained that the time put into her studio albums caused her to "run overtime", leaving little time to plan for other such projects. She also spoke of the difficulty of recreating her studio-oriented sound for the stage. In 1996, Ryan said Enya had received an offer worth almost £500,000 to perform a concert in Japan. In 2016, Enya spoke about the prospect of a live concert, revealing that during her three-year break after And Winter Came... (2008) she had discussed with the Ryans performing a show at the Metropolitan Opera House in New York City that would be simulcast to cinemas worldwide. Before such an event could happen, Nicky suggested that she enter a studio and record "all the hits" live with an orchestra and choir to see how they would sound.
Enya has sung, both live and lip-synced, on various talk and music shows and at events and ceremonies throughout her career, most often during her worldwide press tours for each album. In December 1995, she performed "Anywhere Is" at a Christmas concert at Vatican City with Pope John Paul II in attendance; he later met and thanked her for performing. In April 1996, Enya performed the same song during her surprise appearance at the fiftieth birthday celebration for Carl XVI Gustaf, the king of Sweden and a fan of Enya's. In 1997, Enya participated in a live Christmas Eve broadcast in London and flew to County Donegal afterward to join her family for their annual midnight Mass choral performance, in which she participates each year. In March 2002, she performed "May It Be" with an orchestra at the year's Academy Awards ceremony. Enya and her sisters performed as part of the local choir Cor Mhuire in July 2005 at St. Mary's church in Gweedore during the annual Earagail Arts Festival.
Known for her private lifestyle, Enya has said, "The music is what sells. Not me, or what I stand for... that's the way I've always wanted it." She is unmarried and childfree, but has many nieces and nephews and is considered an aunt to the Ryans' two daughters, having shared their Artane home for almost a decade. In 1991, she said, "I'm afraid of marriage because I'm afraid someone might want me because of who I am instead of because they loved me...
I wouldn't go rushing into anything unexpected, but I do think a great deal about this." A relationship she had with one man ended in 1997, around the time when she considered taking time out of music to have a family, but she found she was putting pressure on herself over the matter and had "gone the route [she] wanted to go".
At an auction in 1997, Enya spent £2.5 million on a 157-year-old Victorian listed castellated mansion in Killiney. Formerly known as Victoria Castle and Ayesha Castle, the house was renamed Manderley Castle by Enya, after the house featured in Daphne du Maurier's novel Rebecca (1938). She spent several years renovating the property and installing considerable security measures because of threats from stalkers. The improvements closed gaps in the house's outer wall, added new solid timber entrance gates and 1.2-metre (4 ft) iron railings, and brought the surrounding 41 metres (135 ft) of stone wall up to a new height of 2.7 metres (9 ft). In late 2005, the property had two security breaches; during one incident, two people attacked and tied up one of her housekeepers before stealing several items. Enya alerted police by raising an alarm from her safe room. Enya oversaw most of the interior design, decoration, and furnishing of her castle; she was "not going to trust that to anyone else".
Enya is not known to express political opinions, although a translated quote from a Belgian interview in 1988, "generations old potentates and politicians have reduced the whole nation to beggary", provides a slight insight into her stance on political and social matters. She also admires the authors Oscar Wilde and J. R. R. Tolkien, as well as Pope Francis. Enya has identified herself as "more spiritual than religious" and has said that she sometimes prays, but prefers "going into churches when they're empty".
Aside from music, Enya has an appreciation for art, and as of 2000 was collecting artworks by Irish artists including Jack Butler Yeats and Louis le Brocquy, and the British artist Albert Goodwin. Enya also enjoys watching operas, classic black-and-white films, and crime drama series such as Breaking Bad, saying "myself, Nicky, and Roma are huge fans of Breaking Bad. We just didn't miss an episode." She also enjoys collecting first editions of books.
The discography of Enya includes 26.5 million certified album sales in the United States and an estimated 80 million record sales worldwide, making her one of the best-selling musicians of all time. A Day Without Rain is the best-selling new-age album, with an estimated 16 million copies sold worldwide. In the United Kingdom and Ireland, Enya's most successful single in the charts was "Orinoco Flow (Sail Away)", reaching number 1 on 23 October 1988 and holding the top placing for three consecutive weeks. Enya's album Shepherd Moons entered the charts at number 1 on 16 November 1991. Enya's awards include seven World Music Awards, four Grammy Awards for Best New Age Album, and an Ivor Novello Award.
A month later, she also received an honorary DLitt from the University of Ulster. In 2017, a newly discovered species of fish, Leporinus enyae, found in the Orinoco River drainage area, was named after Enya in reference to her song "Orinoco Flow".
2001-05-18T17:11:17Z
2023-12-30T10:41:45Z
https://en.wikipedia.org/wiki/Enya
9,483
East Berlin
East Berlin consisted of the Soviet Sector of Berlin and was part of, and the capital of, East Germany. From August 13, 1961, until November 9, 1989, it was separated from West Berlin by the Berlin Wall. On October 3, 1990, West Germany and East Germany were united, thus formally ending the existence of East Berlin.
2001-05-18T18:31:22Z
2023-12-15T14:07:15Z
[]
https://en.wikipedia.org/wiki/East_Berlin
9,486
List of international environmental agreements
This is a list of international environmental agreements. Most of the following agreements are legally binding for countries that have formally ratified them. Some, such as the Kyoto Protocol, differentiate between types of countries and each nation's respective responsibilities under the agreement. Several hundred international environmental agreements exist but most link only a limited number of countries. These bilateral or sometimes trilateral agreements are only binding for the countries that have ratified them but are nevertheless essential in the international environmental regime. Including the major conventions listed below, more than 3,000 international environmental instruments have been identified by the IEA Database Project.
[ { "paragraph_id": 0, "text": "This is a list of international environmental agreements.", "title": "" }, { "paragraph_id": 1, "text": "Most of the following agreements are legally binding for countries that have formally ratified them. Some, such as the Kyoto Protocol, differentiate between types of countries and each nation's respective responsibilities under the agreement. Several hundred international environmental agreements exist but most link only a limited number of countries. These bilateral or sometimes trilateral agreements are only binding for the countries that have ratified them but are nevertheless essential in the international environmental regime. Including the major conventions listed below, more than 3,000 international environmental instruments have been identified by the IEA Database Project.", "title": "" } ]
This is a list of international environmental agreements. Most of the following agreements are legally binding for countries that have formally ratified them. Some, such as the Kyoto Protocol, differentiate between types of countries and each nation's respective responsibilities under the agreement. Several hundred international environmental agreements exist but most link only a limited number of countries. These bilateral or sometimes trilateral agreements are only binding for the countries that have ratified them but are nevertheless essential in the international environmental regime. Including the major conventions listed below, more than 3,000 international environmental instruments have been identified by the IEA Database Project.
2001-10-23T02:59:25Z
2023-11-11T20:34:33Z
[ "Template:Col-end", "Template:Webarchive", "Template:TOC left", "Template:See", "Template:Col-begin", "Template:Environmental law", "Template:Clear", "Template:Div col end", "Template:Col-2", "Template:Aka", "Template:Short description", "Template:Reflist", "Template:Cite web", "Template:Pollution", "Template:Div col" ]
https://en.wikipedia.org/wiki/List_of_international_environmental_agreements
9,487
Epsilon
Epsilon (/ˈɛpsɪlɒn/, UK also /ɛpˈsaɪlən/; uppercase Ε, lowercase ε or lunate ϵ; Greek: έψιλον) is the fifth letter of the Greek alphabet, corresponding phonetically to a mid front unrounded vowel IPA: [e̞] or IPA: [ɛ̝]. In the system of Greek numerals it also has the value five. It was derived from the Phoenician letter He . Letters that arose from epsilon include the Roman E, Ë and Ɛ, and Cyrillic Е, È, Ё, Є and Э. The name of the letter was originally εἶ (Ancient Greek: [êː]), but it was later changed to ἒ ψιλόν (e psilon 'simple e') in the Middle Ages to distinguish the letter from the digraph αι, a former diphthong that had come to be pronounced the same as epsilon. The uppercase form of epsilon is identical to Latin E but has its own code point in Unicode: U+0395 Ε GREEK CAPITAL LETTER EPSILON. The lowercase version has two typographical variants, both inherited from medieval Greek handwriting. One, the most common in modern typography and inherited from medieval minuscule, looks like a reversed number "3" and is encoded U+03B5 ε GREEK SMALL LETTER EPSILON. The other, also known as lunate or uncial epsilon and inherited from earlier uncial writing, looks like a semicircle crossed by a horizontal bar: it is encoded U+03F5 ϵ GREEK LUNATE EPSILON SYMBOL. While in normal typography these are just alternative font variants, they may have different meanings as mathematical symbols: computer systems therefore offer distinct encodings for them. In TeX, \epsilon ( ϵ {\displaystyle \epsilon \!} ) denotes the lunate form, while \varepsilon ( ε {\displaystyle \varepsilon \!} ) denotes the reversed-3 form. Unicode versions 2.0.0 and onwards use ɛ as the lowercase Greek epsilon letter, but in version 1.0.0, ϵ was used. The lunate or uncial epsilon provided inspiration for the euro sign, €. There is also a 'Latin epsilon', ɛ or "open e", which looks similar to the Greek lowercase epsilon. It is encoded in Unicode as U+025B ɛ LATIN SMALL LETTER OPEN E and U+0190 Ɛ LATIN CAPITAL LETTER OPEN E and is used as an IPA phonetic symbol. This Latin uppercase epsilon, Ɛ, is not to be confused with the Greek uppercase Σ (sigma) The lunate epsilon, ϵ, is not to be confused with the set membership symbol ∈. The symbol \in, first used in set theory and logic by Giuseppe Peano and now used in mathematics in general for set membership ("belongs to") evolved from the letter epsilon, since the symbol was originally used as an abbreviation for the Latin word est. In addition, mathematicians often read the symbol ∈ as "element of", as in "1 is an element of the natural numbers" for 1\in\N, for example. As late as 1960, ε itself was used for set membership, while its negation "does not belong to" (now ∉) was denoted by ε' (epsilon prime). Only gradually did a fully separate, stylized symbol take the place of epsilon in this role. In a related context, Peano also introduced the use of a backwards epsilon, ϶, for the phrase "such that", although the abbreviation s.t. is occasionally used in place of ϶ in informal cardinals. The letter Ε was adopted from the Phoenician letter He () when Greeks first adopted alphabetic writing. In archaic Greek writing, its shape is often still identical to that of the Phoenician letter. Like other Greek letters, it could face either leftward or rightward (), depending on the current writing direction, but, just as in Phoenician, the horizontal bars always faced in the direction of writing. 
Archaic writing often preserves the Phoenician form with a vertical stem extending slightly below the lowest horizontal bar. In the classical era, through the influence of more cursive writing styles, the shape was simplified to the current E glyph. While the original pronunciation of the Phoenician letter He was [h], the earliest Greek sound value of Ε was determined by the vowel occurring in the Phoenician letter name, which made it a natural choice for being reinterpreted from a consonant symbol to a vowel symbol denoting an [e] sound. Besides its classical Greek sound value, the short /e/ phoneme, it could initially also be used for other [e]-like sounds. For instance, in early Attic before c. 500 BC, it was used also both for the long, open /ɛː/, and for the long close /eː/. In the former role, it was later replaced in the classic Greek alphabet by Eta (Η), which was taken over from eastern Ionic alphabets, while in the latter role it was replaced by the digraph spelling ΕΙ. Some dialects used yet other ways of distinguishing between various e-like sounds. In Corinth, the normal function of Ε to denote /e/ and /ɛː/ was taken by a glyph resembling a pointed B (), while Ε was used only for long close /eː/. The letter Beta, in turn, took the deviant shape . In Sicyon, a variant glyph resembling an X () was used in the same function as Corinthian . In Thespiai (Boeotia), a special letter form consisting of a vertical stem with a single rightward-pointing horizontal bar () was used for what was probably a raised variant of /e/ in pre-vocalic environments. This tack glyph was used elsewhere also as a form of "Heta", i.e. for the sound /h/. After the establishment of the canonical classical Ionian (Euclidean) Greek alphabet, new glyph variants for Ε were introduced through handwriting. In the uncial script (used for literary papyrus manuscripts in late antiquity and then in early medieval vellum codices), the "lunate" shape () became predominant. In cursive handwriting, a large number of shorthand glyphs came to be used, where the cross-bar and the curved stroke were linked in various ways. Some of them resembled a modern lowercase Latin "e", some a "6" with a connecting stroke to the next letter starting from the middle, and some a combination of two small "c"-like curves. Several of these shapes were later taken over into minuscule book hand. Of the various minuscule letter shapes, the inverted-3 form became the basis for lower-case Epsilon in Greek typography during the modern era. Despite its pronunciation as mid, in the International Phonetic Alphabet, the Latin epsilon /ɛ/ represents open-mid front unrounded vowel, as in the English word pet /pɛt/. The uppercase Epsilon is not commonly used outside of the Greek language because of its similarity to the Latin letter E. However, it is commonly used in structural mechanics with Young's Modulus equations for calculating tensile, compressive and areal strain. The Greek lowercase epsilon ε, the lunate epsilon symbol ϵ, and the Latin lowercase epsilon ɛ (see above) are used in a variety of places: These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style.
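To make the typographic distinction described above concrete, the following minimal LaTeX sketch typesets both lowercase variants alongside the separate set-membership symbol. This is an illustrative fragment only: it assumes a standard LaTeX distribution with the amssymb package (needed for \mathbb), under which \epsilon produces the lunate form and \varepsilon the reversed-3 form, matching the conventions noted in the text.
% Minimal sketch: the two lowercase epsilon variants and the set-membership symbol.
% Assumes a standard LaTeX distribution; amssymb is loaded only for \mathbb{N}.
\documentclass{article}
\usepackage{amssymb}
\begin{document}
Lunate epsilon: $\epsilon$; reversed-3 epsilon: $\varepsilon$.
Set membership (a distinct symbol): $1 \in \mathbb{N}$.
\end{document}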
[ { "paragraph_id": 0, "text": "Epsilon (/ˈɛpsɪlɒn/, UK also /ɛpˈsaɪlən/; uppercase Ε, lowercase ε or lunate ϵ; Greek: έψιλον) is the fifth letter of the Greek alphabet, corresponding phonetically to a mid front unrounded vowel IPA: [e̞] or IPA: [ɛ̝]. In the system of Greek numerals it also has the value five. It was derived from the Phoenician letter He . Letters that arose from epsilon include the Roman E, Ë and Ɛ, and Cyrillic Е, È, Ё, Є and Э.", "title": "" }, { "paragraph_id": 1, "text": "The name of the letter was originally εἶ (Ancient Greek: [êː]), but it was later changed to ἒ ψιλόν (e psilon 'simple e') in the Middle Ages to distinguish the letter from the digraph αι, a former diphthong that had come to be pronounced the same as epsilon.", "title": "" }, { "paragraph_id": 2, "text": "The uppercase form of epsilon is identical to Latin E but has its own code point in Unicode: U+0395 Ε GREEK CAPITAL LETTER EPSILON. The lowercase version has two typographical variants, both inherited from medieval Greek handwriting. One, the most common in modern typography and inherited from medieval minuscule, looks like a reversed number \"3\" and is encoded U+03B5 ε GREEK SMALL LETTER EPSILON. The other, also known as lunate or uncial epsilon and inherited from earlier uncial writing, looks like a semicircle crossed by a horizontal bar: it is encoded U+03F5 ϵ GREEK LUNATE EPSILON SYMBOL. While in normal typography these are just alternative font variants, they may have different meanings as mathematical symbols: computer systems therefore offer distinct encodings for them. In TeX, \\epsilon ( ϵ {\\displaystyle \\epsilon \\!} ) denotes the lunate form, while \\varepsilon ( ε {\\displaystyle \\varepsilon \\!} ) denotes the reversed-3 form. Unicode versions 2.0.0 and onwards use ɛ as the lowercase Greek epsilon letter, but in version 1.0.0, ϵ was used. The lunate or uncial epsilon provided inspiration for the euro sign, €.", "title": "" }, { "paragraph_id": 3, "text": "There is also a 'Latin epsilon', ɛ or \"open e\", which looks similar to the Greek lowercase epsilon. It is encoded in Unicode as U+025B ɛ LATIN SMALL LETTER OPEN E and U+0190 Ɛ LATIN CAPITAL LETTER OPEN E and is used as an IPA phonetic symbol. This Latin uppercase epsilon, Ɛ, is not to be confused with the Greek uppercase Σ (sigma)", "title": "" }, { "paragraph_id": 4, "text": "The lunate epsilon, ϵ, is not to be confused with the set membership symbol ∈. The symbol \\in, first used in set theory and logic by Giuseppe Peano and now used in mathematics in general for set membership (\"belongs to\") evolved from the letter epsilon, since the symbol was originally used as an abbreviation for the Latin word est. In addition, mathematicians often read the symbol ∈ as \"element of\", as in \"1 is an element of the natural numbers\" for 1\\in\\N, for example. As late as 1960, ε itself was used for set membership, while its negation \"does not belong to\" (now ∉) was denoted by ε' (epsilon prime). Only gradually did a fully separate, stylized symbol take the place of epsilon in this role. In a related context, Peano also introduced the use of a backwards epsilon, ϶, for the phrase \"such that\", although the abbreviation s.t. is occasionally used in place of ϶ in informal cardinals.", "title": "" }, { "paragraph_id": 5, "text": "The letter Ε was adopted from the Phoenician letter He () when Greeks first adopted alphabetic writing. In archaic Greek writing, its shape is often still identical to that of the Phoenician letter. 
Like other Greek letters, it could face either leftward or rightward (), depending on the current writing direction, but, just as in Phoenician, the horizontal bars always faced in the direction of writing. Archaic writing often preserves the Phoenician form with a vertical stem extending slightly below the lowest horizontal bar. In the classical era, through the influence of more cursive writing styles, the shape was simplified to the current E glyph.", "title": "History" }, { "paragraph_id": 6, "text": "While the original pronunciation of the Phoenician letter He was [h], the earliest Greek sound value of Ε was determined by the vowel occurring in the Phoenician letter name, which made it a natural choice for being reinterpreted from a consonant symbol to a vowel symbol denoting an [e] sound. Besides its classical Greek sound value, the short /e/ phoneme, it could initially also be used for other [e]-like sounds. For instance, in early Attic before c. 500 BC, it was used also both for the long, open /ɛː/, and for the long close /eː/. In the former role, it was later replaced in the classic Greek alphabet by Eta (Η), which was taken over from eastern Ionic alphabets, while in the latter role it was replaced by the digraph spelling ΕΙ.", "title": "History" }, { "paragraph_id": 7, "text": "Some dialects used yet other ways of distinguishing between various e-like sounds.", "title": "History" }, { "paragraph_id": 8, "text": "In Corinth, the normal function of Ε to denote /e/ and /ɛː/ was taken by a glyph resembling a pointed B (), while Ε was used only for long close /eː/. The letter Beta, in turn, took the deviant shape .", "title": "History" }, { "paragraph_id": 9, "text": "In Sicyon, a variant glyph resembling an X () was used in the same function as Corinthian .", "title": "History" }, { "paragraph_id": 10, "text": "In Thespiai (Boeotia), a special letter form consisting of a vertical stem with a single rightward-pointing horizontal bar () was used for what was probably a raised variant of /e/ in pre-vocalic environments. This tack glyph was used elsewhere also as a form of \"Heta\", i.e. for the sound /h/.", "title": "History" }, { "paragraph_id": 11, "text": "", "title": "History" }, { "paragraph_id": 12, "text": "After the establishment of the canonical classical Ionian (Euclidean) Greek alphabet, new glyph variants for Ε were introduced through handwriting. In the uncial script (used for literary papyrus manuscripts in late antiquity and then in early medieval vellum codices), the \"lunate\" shape () became predominant. In cursive handwriting, a large number of shorthand glyphs came to be used, where the cross-bar and the curved stroke were linked in various ways. Some of them resembled a modern lowercase Latin \"e\", some a \"6\" with a connecting stroke to the next letter starting from the middle, and some a combination of two small \"c\"-like curves. Several of these shapes were later taken over into minuscule book hand. Of the various minuscule letter shapes, the inverted-3 form became the basis for lower-case Epsilon in Greek typography during the modern era.", "title": "History" }, { "paragraph_id": 13, "text": "Despite its pronunciation as mid, in the International Phonetic Alphabet, the Latin epsilon /ɛ/ represents open-mid front unrounded vowel, as in the English word pet /pɛt/.", "title": "Uses" }, { "paragraph_id": 14, "text": "The uppercase Epsilon is not commonly used outside of the Greek language because of its similarity to the Latin letter E. 
However, it is commonly used in structural mechanics with Young's Modulus equations for calculating tensile, compressive and areal strain.", "title": "Uses" }, { "paragraph_id": 15, "text": "The Greek lowercase epsilon ε, the lunate epsilon symbol ϵ, and the Latin lowercase epsilon ɛ (see above) are used in a variety of places:", "title": "Uses" }, { "paragraph_id": 16, "text": "These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style.", "title": "Unicode" } ]
Epsilon is the fifth letter of the Greek alphabet, corresponding phonetically to a mid front unrounded vowel IPA: or IPA:. In the system of Greek numerals it also has the value five. It was derived from the Phoenician letter He. Letters that arose from epsilon include the Roman E, Ë and Ɛ, and Cyrillic Е, È, Ё, Є and Э. The name of the letter was originally εἶ, but it was later changed to ἒ ψιλόν in the Middle Ages to distinguish the letter from the digraph αι, a former diphthong that had come to be pronounced the same as epsilon. The uppercase form of epsilon is identical to Latin E but has its own code point in Unicode: U+0395 Ε GREEK CAPITAL LETTER EPSILON. The lowercase version has two typographical variants, both inherited from medieval Greek handwriting. One, the most common in modern typography and inherited from medieval minuscule, looks like a reversed number "3" and is encoded U+03B5 ε GREEK SMALL LETTER EPSILON. The other, also known as lunate or uncial epsilon and inherited from earlier uncial writing, looks like a semicircle crossed by a horizontal bar: it is encoded U+03F5 ϵ GREEK LUNATE EPSILON SYMBOL. While in normal typography these are just alternative font variants, they may have different meanings as mathematical symbols: computer systems therefore offer distinct encodings for them. In TeX, \epsilon denotes the lunate form, while \varepsilon denotes the reversed-3 form. Unicode versions 2.0.0 and onwards use ɛ as the lowercase Greek epsilon letter, but in version 1.0.0, ϵ was used. The lunate or uncial epsilon provided inspiration for the euro sign, €. There is also a 'Latin epsilon', ɛ or "open e", which looks similar to the Greek lowercase epsilon. It is encoded in Unicode as U+025B ɛ LATIN SMALL LETTER OPEN E and U+0190 Ɛ LATIN CAPITAL LETTER OPEN E and is used as an IPA phonetic symbol. This Latin uppercase epsilon, Ɛ, is not to be confused with the Greek uppercase Σ (sigma) The lunate epsilon, ϵ, is not to be confused with the set membership symbol ∈. The symbol \in, first used in set theory and logic by Giuseppe Peano and now used in mathematics in general for set membership evolved from the letter epsilon, since the symbol was originally used as an abbreviation for the Latin word est. In addition, mathematicians often read the symbol ∈ as "element of", as in "1 is an element of the natural numbers" for 1\in\N, for example. As late as 1960, ε itself was used for set membership, while its negation "does not belong to" was denoted by ε'. Only gradually did a fully separate, stylized symbol take the place of epsilon in this role. In a related context, Peano also introduced the use of a backwards epsilon, ϶, for the phrase "such that", although the abbreviation s.t. is occasionally used in place of ϶ in informal cardinals.
2002-02-25T15:43:11Z
2023-12-26T15:24:08Z
[ "Template:Short description", "Template:IPA-el", "Template:Wikt-lang", "Template:Math", "Template:Anchor", "Template:Cite OED", "Template:Webarchive", "Template:Cite book", "Template:Cite web", "Template:Greek Alphabet", "Template:Lang", "Template:IPA", "Template:Unichar", "Template:Code", "Template:Cite encyclopedia", "Template:ISBN", "Template:Distinguish", "Template:Char", "Template:Circa", "Template:About", "Template:IPAc-en", "Template:Lang-el", "Template:Charmap", "Template:Reflist", "Template:Wiktionary" ]
https://en.wikipedia.org/wiki/Epsilon
9,488
Eta
Eta /ˈiːtə, ˈeɪtə/ EE-tə, AY-tə (uppercase Η, lowercase η; Ancient Greek: ἦτα ē̂ta [ɛ̂ːta] or Greek: ήτα ita [ˈita]) is the seventh letter of the Greek alphabet, representing the close front unrounded vowel IPA: [i]. Originally denoting the voiceless glottal fricative IPA: [h] in most dialects, its sound value in the classical Attic dialect of Ancient Greek was a long open-mid front unrounded vowel IPA: [ɛː], raised to IPA: [i] in hellenistic Greek, a process known as iotacism or itacism. In the ancient Attic number system (Herodianic or acrophonic numbers), the number 100 was represented by "Η", because it was the initial of ΗΕΚΑΤΟΝ, the ancient spelling of ἑκατόν = "one hundred". In the later system of (Classical) Greek numerals eta represents 8. Eta was derived from the Phoenician letter heth . Letters that arose from eta include the Latin H and the Cyrillic letters И and Й. The letter shape 'H' was originally used in most Greek dialects to represent the voiceless glottal fricative IPA: [h]. In this function, it was borrowed in the 8th century BC by the Etruscan and other Old Italic alphabets, which were based on the Euboean form of the Greek alphabet. This also gave rise to the Latin alphabet with its letter H. Other regional variants of the Greek alphabet (epichoric alphabets), in dialects that still preserved the sound IPA: [h], employed various glyph shapes for consonantal heta side by side with the new vocalic eta for some time. In the southern Italian colonies of Heracleia and Tarentum, the letter shape was reduced to a "half-heta" lacking the right vertical stem (Ͱ). From this sign later developed the sign for rough breathing or spiritus asper, which brought back the marking of the IPA: [h] sound into the standardized post-classical (polytonic) orthography. Dionysius Thrax in the second century BC records that the letter name was still pronounced heta (ἥτα), correctly explaining this irregularity by stating "in the old days the letter Η served to stand for the rough breathing, as it still does with the Romans." In the East Ionic dialect, however, the sound IPA: [h] disappeared by the sixth century BC, and the letter was re-used initially to represent a development of a long open front unrounded vowel IPA: [aː], which later merged in East Ionic with the long open-mid front unrounded vowel IPA: [ɛː] instead. In 403 BC, Athens took over the Ionian spelling system and with it the vocalic use of H (even though it still also had the IPA: [h] sound itself at that time). This later became the standard orthography in all of Greece. During the time of post-classical Koiné Greek, the IPA: [ɛː] sound represented by eta was raised and merged with several other formerly distinct vowels, a phenomenon called iotacism or itacism, after the new pronunciation of the letter name as ita instead of eta. Itacism is continued into Modern Greek, where the letter name is pronounced [ˈita] and represents the close front unrounded vowel IPA: [i]. It shares this function with several other letters (ι, υ) and digraphs (ει, οι), which are all pronounced alike. Eta was also borrowed with the sound value of [i] into the Cyrillic script, where it gave rise to the Cyrillic letter И. In Modern Greek, due to iotacism, the letter (pronounced [ˈita]) represents a close front unrounded vowel, IPA: [i]. In Classical Greek, it represented the long open-mid front unrounded vowel IPA: [ɛː]. 
The uppercase letter Η is used as a symbol in textual criticism for the Alexandrian text-type (from Hesychius, its once-supposed editor). In chemistry, the letter H as symbol of enthalpy sometimes is said to be a Greek eta, but since enthalpy comes from ἐνθάλπος, which begins in a smooth breathing and epsilon, it is more likely a Latin H for 'heat'. In information theory the uppercase Greek letter Η is used to represent the concept of entropy of a discrete random variable. The lowercase letter η is used as a symbol in: These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style.
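As a brief worked illustration of the information-theoretic usage mentioned above, the entropy of a discrete random variable X with probability mass function p is conventionally written with a capital H (typographically indistinguishable from uppercase eta), as in the LaTeX fragment below. This is only a notational sketch of the standard Shannon entropy; the base-2 logarithm gives a value in bits, though other bases are also used.
% Standard Shannon entropy of a discrete random variable X (in bits):
\[
  \mathrm{H}(X) = -\sum_{x} p(x) \log_{2} p(x)
\]
% Worked check: for a fair coin, H(X) = -(1/2)log2(1/2) - (1/2)log2(1/2) = 1 bit.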
[ { "paragraph_id": 0, "text": "Eta /ˈiːtə, ˈeɪtə/ EE-tə, AY-tə (uppercase Η, lowercase η; Ancient Greek: ἦτα ē̂ta [ɛ̂ːta] or Greek: ήτα ita [ˈita]) is the seventh letter of the Greek alphabet, representing the close front unrounded vowel IPA: [i]. Originally denoting the voiceless glottal fricative IPA: [h] in most dialects, its sound value in the classical Attic dialect of Ancient Greek was a long open-mid front unrounded vowel IPA: [ɛː], raised to IPA: [i] in hellenistic Greek, a process known as iotacism or itacism.", "title": "" }, { "paragraph_id": 1, "text": "In the ancient Attic number system (Herodianic or acrophonic numbers), the number 100 was represented by \"Η\", because it was the initial of ΗΕΚΑΤΟΝ, the ancient spelling of ἑκατόν = \"one hundred\". In the later system of (Classical) Greek numerals eta represents 8.", "title": "" }, { "paragraph_id": 2, "text": "Eta was derived from the Phoenician letter heth . Letters that arose from eta include the Latin H and the Cyrillic letters И and Й.", "title": "" }, { "paragraph_id": 3, "text": "The letter shape 'H' was originally used in most Greek dialects to represent the voiceless glottal fricative IPA: [h]. In this function, it was borrowed in the 8th century BC by the Etruscan and other Old Italic alphabets, which were based on the Euboean form of the Greek alphabet. This also gave rise to the Latin alphabet with its letter H.", "title": "History" }, { "paragraph_id": 4, "text": "Other regional variants of the Greek alphabet (epichoric alphabets), in dialects that still preserved the sound IPA: [h], employed various glyph shapes for consonantal heta side by side with the new vocalic eta for some time. In the southern Italian colonies of Heracleia and Tarentum, the letter shape was reduced to a \"half-heta\" lacking the right vertical stem (Ͱ). From this sign later developed the sign for rough breathing or spiritus asper, which brought back the marking of the IPA: [h] sound into the standardized post-classical (polytonic) orthography. Dionysius Thrax in the second century BC records that the letter name was still pronounced heta (ἥτα), correctly explaining this irregularity by stating \"in the old days the letter Η served to stand for the rough breathing, as it still does with the Romans.\"", "title": "History" }, { "paragraph_id": 5, "text": "In the East Ionic dialect, however, the sound IPA: [h] disappeared by the sixth century BC, and the letter was re-used initially to represent a development of a long open front unrounded vowel IPA: [aː], which later merged in East Ionic with the long open-mid front unrounded vowel IPA: [ɛː] instead. In 403 BC, Athens took over the Ionian spelling system and with it the vocalic use of H (even though it still also had the IPA: [h] sound itself at that time). This later became the standard orthography in all of Greece.", "title": "History" }, { "paragraph_id": 6, "text": "During the time of post-classical Koiné Greek, the IPA: [ɛː] sound represented by eta was raised and merged with several other formerly distinct vowels, a phenomenon called iotacism or itacism, after the new pronunciation of the letter name as ita instead of eta.", "title": "History" }, { "paragraph_id": 7, "text": "Itacism is continued into Modern Greek, where the letter name is pronounced [ˈita] and represents the close front unrounded vowel IPA: [i]. 
It shares this function with several other letters (ι, υ) and digraphs (ει, οι), which are all pronounced alike.", "title": "History" }, { "paragraph_id": 8, "text": "Eta was also borrowed with the sound value of [i] into the Cyrillic script, where it gave rise to the Cyrillic letter И.", "title": "History" }, { "paragraph_id": 9, "text": "In Modern Greek, due to iotacism, the letter (pronounced [ˈita]) represents a close front unrounded vowel, IPA: [i]. In Classical Greek, it represented the long open-mid front unrounded vowel IPA: [ɛː].", "title": "Uses" }, { "paragraph_id": 10, "text": "The uppercase letter Η is used as a symbol in textual criticism for the Alexandrian text-type (from Hesychius, its once-supposed editor).", "title": "Uses" }, { "paragraph_id": 11, "text": "In chemistry, the letter H as symbol of enthalpy sometimes is said to be a Greek eta, but since enthalpy comes from ἐνθάλπος, which begins in a smooth breathing and epsilon, it is more likely a Latin H for 'heat'.", "title": "Uses" }, { "paragraph_id": 12, "text": "In information theory the uppercase Greek letter Η is used to represent the concept of entropy of a discrete random variable.", "title": "Uses" }, { "paragraph_id": 13, "text": "The lowercase letter η is used as a symbol in:", "title": "Uses" }, { "paragraph_id": 14, "text": "These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style.", "title": "Character encodings" } ]
Eta is the seventh letter of the Greek alphabet, representing the close front unrounded vowel IPA:. Originally denoting the voiceless glottal fricative IPA: in most dialects, its sound value in the classical Attic dialect of Ancient Greek was a long open-mid front unrounded vowel IPA:, raised to IPA: in hellenistic Greek, a process known as iotacism or itacism. In the ancient Attic number system, the number 100 was represented by "Η", because it was the initial of ΗΕΚΑΤΟΝ, the ancient spelling of ἑκατόν = "one hundred". In the later system of (Classical) Greek numerals eta represents 8. Eta was derived from the Phoenician letter heth. Letters that arose from eta include the Latin H and the Cyrillic letters И and Й.
2002-02-25T15:43:11Z
2023-09-22T18:40:13Z
[ "Template:Lang-grc", "Template:IPA", "Template:Reflist", "Template:Lang", "Template:Main", "Template:Wiktionary", "Template:Webarchive", "Template:Greek Alphabet", "Template:Respell", "Template:Script", "Template:OED", "Template:Short description", "Template:Lang-ell", "Template:Charmap", "Template:IPA-el", "Template:Cite book", "Template:For", "Template:About", "Template:IPAc-en" ]
https://en.wikipedia.org/wiki/Eta
9,491
Eskimo
Eskimo (/ˈɛskɪmoʊ/) is an exonym used to refer to two closely related Indigenous peoples: Inuit (including the Alaska Native Iñupiat, the Canadian Inuit, and the Greenlandic Inuit) and the Yupik (or Yuit) of eastern Siberia and Alaska. A related third group, the Aleut, which inhabit the Aleutian Islands, are generally excluded from the definition of Eskimo. The three groups share a relatively recent common ancestor, and speak related languages belonging to the Eskaleut language family. These circumpolar peoples have traditionally inhabited the Arctic and subarctic regions from eastern Siberia (Russia) to Alaska (United States), Northern Canada, Nunavik, Nunatsiavut, and Greenland. Many Inuit, Yupik, Aleut, and other individuals consider the term Eskimo, which is of a disputed etymology, to be offensive and even pejorative. Eskimo continues to be used within a historical, linguistic, archaeological, and cultural context. The governments in Canada and the United States have made moves to cease using the term Eskimo in official documents, but it has not been eliminated, as the word is in some places written into tribal, and therefore national, legal terminology. Canada officially uses the term Inuit to describe the indigenous Canadian people who are living in the country's northern sectors and are not First Nations or Métis. The United States government legally uses Alaska Native for Native Alaskans including the Yupik, Inuit, and Aleut, but also for non-Eskimo Native Alaskans including the Tlingit, the Haida, the Eyak, and the Tsimshian, in addition to at least nine separate northern Athabaskan/Dene peoples. The designation Alaska Native applies to enrolled tribal members only, in contrast to individual Eskimo/Aleut persons claiming descent from the world's "most widespread aboriginal group". There are between 171,000 and 187,000 Inuit and Yupik, the majority of whom live in or near their traditional circumpolar homeland. Of these, 53,785 (2010) live in the United States, 65,025 (2016) in Canada, 51,730 (2021) in Greenland and 1657 (2021) in Russia. In addition, 16,730 people living in Denmark were born in Greenland. The non-governmental organization (NGO) known as the Inuit Circumpolar Council claims to represent 180,000 people. The non-Inuit sub-branch of the Eskimo branch of the Eskaleut language family consists of four distinct Yupik languages. Two of them are used in the Russian Far East as well as on St. Lawrence Island, and two of them are used in western Alaska, southwestern Alaska, and the western part of Southcentral Alaska. The extinct language of the Sirenik people is sometimes claimed to be related to these other languages. A variety of theories have been postulated for the etymological origin of the word Eskimo. According to Smithsonian linguist Ives Goddard, etymologically the word derives from the Innu-aimun (Montagnais) word ayas̆kimew, meaning "a person who laces a snowshoe", and is related to husky (a breed of dog). The word assime·w means "she laces a snowshoe" in Innu, and Innu language speakers refer to the neighbouring Mi'kmaq people using words that sound like eskimo. This interpretation is generally confirmed by more recent academic sources. In 1978, José Mailhot, a Quebec anthropologist who speaks Innu-aimun (Montagnais), published a paper suggesting that Eskimo meant "people who speak a different language". 
French traders who encountered the Innu (Montagnais) in the eastern areas adopted their word for the more western peoples and spelled it as Esquimau or Esquimaux in a transliteration. Some people consider Eskimo offensive, because it is popularly perceived to mean "eaters of raw meat" in Algonquian languages common to people along the Atlantic coast. An unnamed Cree speaker suggested the original word that became corrupted to Eskimo might have been askamiciw (meaning "he eats it raw"); Inuit are referred to in some Cree texts as askipiw (meaning "eats something raw"). Regardless, the term still carries a derogatory connotation for many Inuit and Yupik. One of the first printed uses of the French word Esquimaux comes from Samuel Hearne's A Journey from Prince of Wales's Fort in Hudson's Bay to the Northern Ocean in the Years 1769, 1770, 1771, 1772 first published in 1795. The term Eskimo is still used by people to encompass Inuit and Yupik, as well as other Indigenous or Alaska Native and Siberian peoples. In the 21st century, usage in North America has declined. Linguistic, ethnic, and cultural differences exist between Yupik and Inuit. In Canada and Greenland, and to a certain extent in Alaska, the term Eskimo is predominantly seen as offensive and has been widely replaced by the term Inuit or terms specific to a particular group or community. This has resulted in a trend whereby some non-Indigenous people believe that they should use Inuit even for Yupik who are non-Inuit. Greenlandic Inuit generally refer to themselves as Greenlanders ("Kalaallit" or "Grønlændere") and speak the Greenlandic language and Danish. Greenlandic Inuit belong to three groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of Tunu (east Greenland), who speak Tunumiit oraasiat ("East Greenlandic"); and the Inughuit of north Greenland, who speak Inuktun. The word "Eskimo" is a racially charged term in Canada. In Canada's Central Arctic, Inuinnaq is the preferred term, and in the eastern Canadian Arctic Inuit. The language is often called Inuktitut, though other local designations are also used. Section 25 of the Canadian Charter of Rights and Freedoms and section 35 of the Canadian Constitution Act of 1982 recognized Inuit as a distinctive group of Aboriginal peoples in Canada. Although Inuit can be applied to all of the Eskimo peoples in Canada and Greenland, that is not true in Alaska and Siberia. In Alaska, the term Eskimo is still used because it includes both Iñupiat (singular: Iñupiaq), who are Inuit, and Yupik, who are not. The term Alaska Native is inclusive of (and under U.S. and Alaskan law, as well as the linguistic and cultural legacy of Alaska, refers to) all Indigenous peoples of Alaska, including not only the Iñupiat (Alaskan Inuit) and the Yupik, but also groups such as the Aleut, who share a recent ancestor, as well as the largely unrelated indigenous peoples of the Pacific Northwest Coast and the Alaskan Athabaskans, such as the Eyak people. The term Alaska Native has important legal usage in Alaska and the rest of the United States as a result of the Alaska Native Claims Settlement Act of 1971. It does not apply to Inuit or Yupik originating outside the state. As a result, the term Eskimo is still in use in Alaska. Alternative terms, such as Inuit-Yupik, have been proposed, but none has gained widespread acceptance. 
Early 21st century population estimates registered more than 135,000 individuals of Eskimo descent, with approximately 85,000 living in North America, 50,000 in Greenland, and the rest residing in Siberia. In 1977, the Inuit Circumpolar Conference (ICC) meeting in Utqiaġvik, Alaska, officially adopted Inuit as a designation for all circumpolar Native peoples, regardless of their local view on an appropriate term. They voted to replace the word Eskimo with Inuit. Even at that time, such a designation was not accepted by all. As a result, the Canadian government usage has replaced the term Eskimo with Inuit (Inuk in singular). The ICC charter defines Inuit as including "the Inupiat, Yupik (Alaska), Inuit, Inuvialuit (Canada), Kalaallit (Greenland) and Yupik (Russia)". Despite the ICC's 1977 decision to adopt the term Inuit, this has not been accepted by all or even most Yupik people. In 2010, the ICC passed a resolution in which they implored scientists to use Inuit and Paleo-Inuit instead of Eskimo or Paleo-Eskimo. In a 2015 commentary in the journal Arctic, Canadian archaeologist Max Friesen argued fellow Arctic archaeologists should follow the ICC and use Paleo-Inuit instead of Paleo-Eskimo. In 2016, Lisa Hodgetts and Arctic editor Patricia Wells wrote: "In the Canadian context, continued use of any term that incorporates Eskimo is potentially harmful to the relationships between archaeologists and the Inuit and Inuvialuit communities who are our hosts and increasingly our research partners." Hodgetts and Wells suggested using more specific terms when possible (e.g., Dorset and Groswater) and agreed with Friesen in using the Inuit tradition to replace Neo-Eskimo, although they noted replacement for Palaeoeskimo was still an open question and discussed Paleo-Inuit, Arctic Small Tool Tradition, and pre-Inuit, as well as Inuktitut loanwords like Tuniit and Sivullirmiut, as possibilities. In 2020, Katelyn Braymer-Hayes and colleagues argued in the Journal of Anthropological Archaeology that there is a "clear need" to replace the terms Neo-Eskimo and Paleo-Eskimo, citing the ICC resolution, but finding a consensus within the Alaskan context particularly is difficult, since Alaska Natives do not use the word Inuit to describe themselves nor is the term legally applicable only to Iñupiat and Yupik in Alaska, and as such, terms used in Canada like Paleo Inuit and Ancestral Inuit would not be acceptable. American linguist Lenore Grenoble has also explicitly deferred to the ICC resolution and used Inuit–Yupik instead of Eskimo with regards to the language branch. Genetic evidence suggests that the Americas were populated from northeastern Asia in multiple waves. While the great majority of indigenous American peoples can be traced to a single early migration of Paleo-Indians, the Na-Dené, Inuit and Indigenous Alaskan populations exhibit admixture from distinct populations that migrated into America at a later date and are closely linked to the peoples of far northeastern Asia (e.g. Chukchi), and only more remotely to the majority indigenous American type. For modern Eskimo–Aleut speakers, this later ancestral component makes up almost half of their genomes. The ancient Paleo-Eskimo population was genetically distinct from the modern circumpolar populations, but eventually derives from the same far northeastern Asian cluster.
It is understood that some or all of these ancient people migrated across the Chukchi Sea to North America during the pre-neolithic era, somewhere around 5,000 to 10,000 years ago. It is believed that ancestors of the Aleut people inhabited the Aleutian Chain 10,000 years ago. The earliest positively identified Paleo-Eskimo cultures (Early Paleo-Eskimo) date to 5,000 years ago. Several earlier indigenous peoples existed in the northern circumpolar regions of eastern Siberia, Alaska, and Canada (although probably not in Greenland). The Paleo-Eskimo peoples appear to have developed in Alaska from people related to the Arctic small tool tradition in eastern Asia, whose ancestors had probably migrated to Alaska at least 3,000 to 5,000 years earlier. The Yupik languages and cultures in Alaska evolved in place, beginning with the original pre-Dorset Indigenous culture developed in Alaska. At least 4,000 years ago, the Unangan culture of the Aleut became distinct. It is not generally considered an Eskimo culture. However, there is some possibility of an Aleutian origin of the Dorset people, who in turn are a likely ancestor of today's Inuit and Yupik. Approximately 1,500 to 2,000 years ago, apparently in northwestern Alaska, two other distinct variations appeared. Inuit language became distinct and, over a period of several centuries, its speakers migrated across northern Alaska, through Canada, and into Greenland. The distinct culture of the Thule people (drawing strongly from the Birnirk culture) developed in northwestern Alaska. It very quickly spread over the entire area occupied by Eskimo peoples, though it was not necessarily adopted by all of them. The Eskimo–Aleut family of languages includes two cognate branches: the Aleut (Unangan) branch and the Eskimo branch. The number of cases varies, with Aleut languages having a greatly reduced case system compared to those of the Eskimo subfamily. Eskimo–Aleut languages possess voiceless plosives at the bilabial, coronal, velar and uvular positions in all languages except Aleut, which has lost the bilabial stops but retained the nasal. In the Eskimo subfamily a voiceless alveolar lateral fricative is also present. The Eskimo sub-family consists of the Inuit language and Yupik language sub-groups. The Sirenikski language, which is virtually extinct, is sometimes regarded as a third branch of the Eskimo language family. Other sources regard it as a group belonging to the Yupik branch. Inuit languages comprise a dialect continuum, or dialect chain, that stretches from Unalakleet and Norton Sound in Alaska, across northern Alaska and Canada, and east to Greenland. Changes from western (Iñupiaq) to eastern dialects are marked by the dropping of vestigial Yupik-related features, increasing consonant assimilation (e.g., kumlu, meaning "thumb", changes to kuvlu, changes to kublu, changes to kulluk, changes to kulluq,) and increased consonant lengthening, and lexical change. Thus, speakers of two adjacent Inuit dialects would usually be able to understand one another, but speakers from dialects distant from each other on the dialect continuum would have difficulty understanding one another. Seward Peninsula dialects in western Alaska, where much of the Iñupiat culture has been in place for perhaps less than 500 years, are greatly affected by phonological influence from the Yupik languages. Eastern Greenlandic, at the opposite end of Inuit range, has had significant word replacement due to a unique form of ritual name avoidance. 
Ethnographically, Greenlandic Inuit belong to three groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of Tunu (east Greenland), who speak Tunumiit oraasiat ("East Greenlandic"), and the Inughuit of north Greenland, who speak Inuktun. The four Yupik languages, by contrast, including Alutiiq (Sugpiaq), Central Alaskan Yup'ik, Naukan (Naukanski), and Siberian Yupik, are distinct languages with phonological, morphological, and lexical differences. They demonstrate limited mutual intelligibility. Additionally, both Alutiiq and Central Yup'ik have considerable dialect diversity. The northernmost Yupik languages – Siberian Yupik and Naukan Yupik – are linguistically only slightly closer to Inuit than is Alutiiq, which is the southernmost of the Yupik languages. Although the grammatical structures of Yupik and Inuit languages are similar, they have pronounced differences phonologically. Differences of vocabulary between Inuit and any one of the Yupik languages are greater than between any two Yupik languages. Even the dialectal differences within Alutiiq and Central Alaskan Yup'ik sometimes are relatively great for locations that are relatively close geographically. Despite the relatively small population of Naukan speakers, documentation of the language dates back to 1732. While Naukan is only spoken in Siberia, the language acts as an intermediate between two Alaskan languages: Siberian Yupik Eskimo and Central Yup'ik Eskimo. The Sirenikski language is sometimes regarded as a third branch of the Eskimo language family, but other sources regard it as a group belonging to the Yupik branch. An overview of the Eskimo–Aleut languages family is given below: American linguist Lenore Grenoble has explicitly deferred to this resolution and used Inuit–Yupik instead of Eskimo with regards to the language branch. There has been a long-running linguistic debate about whether or not the speakers of the Eskimo-Aleut language group have an unusually large number of words for snow. The general modern consensus is that, in multiple Eskimo languages, there are, or have been in simultaneous usage, indeed fifty plus words for snow. Historically Inuit cuisine, which is taken here to include Greenlandic cuisine, Yup'ik cuisine and Aleut cuisine, consisted of a diet of animal source foods that were fished, hunted, and gathered locally. Inuit inhabit the Arctic and northern Bering Sea coasts of Alaska in the United States, and Arctic coasts of the Northwest Territories, Nunavut, Quebec, and Labrador in Canada, and Greenland (associated with Denmark). Until fairly recent times, there has been a remarkable homogeneity in the culture throughout this area, which traditionally relied on fish, marine mammals, and land animals for food, heat, light, clothing, and tools. Their food sources primarily relied on seals, whales, whale blubber, walrus, and fish, all of which they hunted using harpoons on the ice. Clothing consisted of robes made of wolfskin and reindeer skin to acclimate to the low temperatures. They maintain a unique Inuit culture. Greenlandic Inuit make up 90% of Greenland's population. They belong to three major groups: Canadian Inuit live primarily in Inuit Nunangat (lit. "lands, waters and ices of the [Inuit] people"), their traditional homeland although some people live in southern parts of Canada. Inuit Nunangat ranges from the Yukon–Alaska border in the west across the Arctic to northern Labrador. 
The Inuvialuit live in the Inuvialuit Settlement Region, the northern part of Yukon and the Northwest Territories, which stretches to the Amundsen Gulf and the Nunavut border and includes the western Canadian Arctic Islands. The land was demarked in 1984 by the Inuvialuit Final Agreement. The majority of Inuit live in Nunavut (a territory of Canada), Nunavik (the northern part of Quebec) and in Nunatsiavut (Inuit settlement region in Labrador). The Iñupiat are Inuit of Alaska's Northwest Arctic and North Slope boroughs and the Bering Straits region, including the Seward Peninsula. Utqiaġvik, the northernmost city in the United States, is above the Arctic Circle and in the Iñupiat region. Their language is known as Iñupiaq. Their current communities include 34 villages across Iñupiat Nunaŋat (Iñupiaq lands) including seven Alaskan villages in the North Slope Borough, affiliated with the Arctic Slope Regional Corporation; eleven villages in Northwest Arctic Borough; and sixteen villages affiliated with the Bering Straits Regional Corporation. The Yupik are indigenous or aboriginal peoples who live along the coast of western Alaska, especially on the Yukon-Kuskokwim delta and along the Kuskokwim River (Central Alaskan Yup'ik); in southern Alaska (the Alutiiq); and along the eastern coast of Chukotka in the Russian Far East and St. Lawrence Island in western Alaska (the Siberian Yupik). The Yupik economy has traditionally been strongly dominated by the harvest of marine mammals, especially seals, walrus, and whales. The Alutiiq people (pronounced /əˈluːtɪk/ ə-LOO-tik in English; from Promyshlenniki Russian Алеутъ, "Aleut"; plural often "Alutiit"), also called by their ancestral name Sugpiaq (/ˈsʊɡˌbjɑːk/ SUUG-byahk or /ˈsʊɡpiˌæk/ SUUG-pee-AK; plural often "Sugpiat"), as well as Pacific Eskimo or Pacific Yupik, are one of eight groups of Alaska Natives that inhabit the southern-central coast of the region. The Alutiiq language is relatively close to that spoken by the Yupik in the Bethel, Alaska area. But, it is considered a distinct language with two major dialects: the Koniag dialect, spoken on the Alaska Peninsula and on Kodiak Island, and the Chugach dialect, spoken on the southern Kenai Peninsula and in Prince William Sound. Residents of Nanwalek, located on southern part of the Kenai Peninsula near Seldovia, speak what they call Sugpiaq. They are able to understand those who speak Yupik in Bethel. With a population of approximately 3,000, and the number of speakers in the hundreds, Alutiiq communities are working to revitalize their language. Yup'ik, with an apostrophe, denotes the speakers of the Central Alaskan Yup'ik language, who live in western Alaska and southwestern Alaska from southern Norton Sound to the north side of Bristol Bay, on the Yukon–Kuskokwim Delta, and on Nelson Island. The use of the apostrophe in the name Yup'ik is a written convention to denote the long pronunciation of the p sound; but it is spoken the same in other Yupik languages. Of all the Alaska Native languages, Central Alaskan Yup'ik has the most speakers, with about 10,000 of a total Yup'ik population of 21,000 still speaking the language. The five dialects of Central Alaskan Yup'ik include General Central Yup'ik, and the Egegik, Norton Sound, Hooper Bay-Chevak, and Nunivak dialects. In the latter two dialects, both the language and the people are called Cup'ik. 
Siberian Yupik reside along the Bering Sea coast of the Chukchi Peninsula in Siberia in the Russian Far East and in the villages of Gambell and Savoonga on St. Lawrence Island in Alaska. The Central Siberian Yupik spoken on the Chukchi Peninsula and on St. Lawrence Island is nearly identical. About 1,050 of a total Alaska population of 1,100 Siberian Yupik people in Alaska speak the language. It is the first language of the home for most St. Lawrence Island children. In Siberia, about 300 of a total of 900 Siberian Yupik people still learn and study the language, though it is no longer learned as a first language by children. About 70 of 400 Naukan people still speak Naukanski. The Naukan originate on the Chukot Peninsula in Chukotka Autonomous Okrug in Siberia. Despite the relatively small population of Naukan speakers, documentation of the language dates back to 1732. While Naukan is only spoken in Siberia, the language acts as an intermediate between two Alaskan languages: Siberian Yupik Eskimo and Central Yup'ik Eskimo. Some speakers of Siberian Yupik languages used to speak an Eskimo variant in the past, before they underwent a language shift. These former speakers of Sirenik Eskimo language inhabited the settlements of Sireniki, Imtuk, and some small villages stretching to the west from Sireniki along south-eastern coasts of Chukchi Peninsula. They lived in neighborhoods with Siberian Yupik and Chukchi peoples. As early as 1895, Imtuk was a settlement with a mixed population of Sirenik Eskimos and Ungazigmit (the latter belonging to Siberian Yupik). Sirenik Eskimo culture has been influenced by that of Chukchi, and the language shows Chukchi language influences. Folktale motifs also show the influence of Chukchi culture. The above peculiarities of this (already extinct) Eskimo language amounted to mutual unintelligibility even with its nearest language relatives: in the past, Sirenik Eskimos had to use the unrelated Chukchi language as a lingua franca for communicating with Siberian Yupik. Many words are formed from entirely different roots from those in Siberian Yupik, but even the grammar has several peculiarities distinct not only among Eskimo languages, but even compared to Aleut. For example, dual number is not known in Sirenik Eskimo, while most Eskimo–Aleut languages have dual, including its neighboring Siberian Yupik relatives. Little is known about the origin of this diversity. The peculiarities of this language may be the result of a supposed long isolation from other Eskimo groups, and being in contact only with speakers of unrelated languages for many centuries. The influence of the Chukchi language is clear. Because of all these factors, the classification of Sireniki Eskimo language is not settled yet: Sireniki language is sometimes regarded as a third branch of Eskimo (at least, its possibility is mentioned). Sometimes it is regarded rather as a group belonging to the Yupik branch.
[ { "paragraph_id": 0, "text": "Eskimo (/ˈɛskɪmoʊ/) is an exonym used to refer to two closely related Indigenous peoples: Inuit (including the Alaska Native Iñupiat, the Canadian Inuit, and the Greenlandic Inuit) and the Yupik (or Yuit) of eastern Siberia and Alaska. A related third group, the Aleut, which inhabit the Aleutian Islands, are generally excluded from the definition of Eskimo. The three groups share a relatively recent common ancestor, and speak related languages belonging to the Eskaleut language family.", "title": "" }, { "paragraph_id": 1, "text": "These circumpolar peoples have traditionally inhabited the Arctic and subarctic regions from eastern Siberia (Russia) to Alaska (United States), Northern Canada, Nunavik, Nunatsiavut, and Greenland.", "title": "" }, { "paragraph_id": 2, "text": "Many Inuit, Yupik, Aleut, and other individuals consider the term Eskimo, which is of a disputed etymology, to be offensive and even pejorative. Eskimo continues to be used within a historical, linguistic, archaeological, and cultural context. The governments in Canada and the United States have made moves to cease using the term Eskimo in official documents, but it has not been eliminated, as the word is in some places written into tribal, and therefore national, legal terminology. Canada officially uses the term Inuit to describe the indigenous Canadian people who are living in the country's northern sectors and are not First Nations or Métis. The United States government legally uses Alaska Native for Native Alaskans including the Yupik, Inuit, and Aleut, but also for non-Eskimo Native Alaskans including the Tlingit, the Haida, the Eyak, and the Tsimshian, in addition to at least nine separate northern Athabaskan/Dene peoples. The designation Alaska Native applies to enrolled tribal members only, in contrast to individual Eskimo/Aleut persons claiming descent from the world's \"most widespread aboriginal group\".", "title": "" }, { "paragraph_id": 3, "text": "There are between 171,000 and 187,000 Inuit and Yupik, the majority of whom live in or near their traditional circumpolar homeland. Of these, 53,785 (2010) live in the United States, 65,025 (2016) in Canada, 51,730 (2021) in Greenland and 1657 (2021) in Russia. In addition, 16,730 people living in Denmark were born in Greenland. The non-governmental organization (NGO) known as the Inuit Circumpolar Council claims to represent 180,000 people.", "title": "" }, { "paragraph_id": 4, "text": "The non-Inuit sub-branch of the Eskimo branch of the Eskaleut language family consists of four distinct Yupik languages. Two of them are used in the Russian Far East as well as on St. Lawrence Island, and two of them are used in western Alaska, southwestern Alaska, and the western part of Southcentral Alaska. The extinct language of the Sirenik people is sometimes claimed to be related to these other languages.", "title": "" }, { "paragraph_id": 5, "text": "A variety of theories have been postulated for the etymological origin of the word Eskimo. According to Smithsonian linguist Ives Goddard, etymologically the word derives from the Innu-aimun (Montagnais) word ayas̆kimew, meaning \"a person who laces a snowshoe\", and is related to husky (a breed of dog). The word assime·w means \"she laces a snowshoe\" in Innu, and Innu language speakers refer to the neighbouring Mi'kmaq people using words that sound like eskimo. 
This interpretation is generally confirmed by more recent academic sources.", "title": "Nomenclature" }, { "paragraph_id": 6, "text": "In 1978, José Mailhot, a Quebec anthropologist who speaks Innu-aimun (Montagnais), published a paper suggesting that Eskimo meant \"people who speak a different language\". French traders who encountered the Innu (Montagnais) in the eastern areas adopted their word for the more western peoples and spelled it as Esquimau or Esquimaux in a transliteration.", "title": "Nomenclature" }, { "paragraph_id": 7, "text": "Some people consider Eskimo offensive, because it is popularly perceived to mean \"eaters of raw meat\" in Algonquian languages common to people along the Atlantic coast. An unnamed Cree speaker suggested the original word that became corrupted to Eskimo might have been askamiciw (meaning \"he eats it raw\"); Inuit are referred to in some Cree texts as askipiw (meaning \"eats something raw\"). Regardless, the term still carries a derogatory connotation for many Inuit and Yupik.", "title": "Nomenclature" }, { "paragraph_id": 8, "text": "One of the first printed uses of the French word Esquimaux comes from Samuel Hearne's A Journey from Prince of Wales's Fort in Hudson's Bay to the Northern Ocean in the Years 1769, 1770, 1771, 1772 first published in 1795.", "title": "Nomenclature" }, { "paragraph_id": 9, "text": "The term Eskimo is still used by people to encompass Inuit and Yupik, as well as other Indigenous or Alaska Native and Siberian peoples. In the 21st century, usage in North America has declined. Linguistic, ethnic, and cultural differences exist between Yupik and Inuit.", "title": "Nomenclature" }, { "paragraph_id": 10, "text": "In Canada and Greenland, and to a certain extent in Alaska, the term Eskimo is predominantly seen as offensive and has been widely replaced by the term Inuit or terms specific to a particular group or community. This has resulted in a trend whereby some non-Indigenous people believe that they should use Inuit even for Yupik who are non-Inuit.", "title": "Nomenclature" }, { "paragraph_id": 11, "text": "Greenlandic Inuit generally refer to themselves as Greenlanders (\"Kalaallit\" or \"Grønlændere\") and speak the Greenlandic language and Danish. Greenlandic Inuit belong to three groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of Tunu (east Greenland), who speak Tunumiit oraasiat (\"East Greenlandic\"); and the Inughuit of north Greenland, who speak Inuktun.", "title": "Nomenclature" }, { "paragraph_id": 12, "text": "The word \"Eskimo\" is a racially charged term in Canada. In Canada's Central Arctic, Inuinnaq is the preferred term, and in the eastern Canadian Arctic Inuit. The language is often called Inuktitut, though other local designations are also used.", "title": "Nomenclature" }, { "paragraph_id": 13, "text": "Section 25 of the Canadian Charter of Rights and Freedoms and section 35 of the Canadian Constitution Act of 1982 recognized Inuit as a distinctive group of Aboriginal peoples in Canada. Although Inuit can be applied to all of the Eskimo peoples in Canada and Greenland, that is not true in Alaska and Siberia. In Alaska, the term Eskimo is still used because it includes both Iñupiat (singular: Iñupiaq), who are Inuit, and Yupik, who are not.", "title": "Nomenclature" }, { "paragraph_id": 14, "text": "The term Alaska Native is inclusive of (and under U.S. 
and Alaskan law, as well as the linguistic and cultural legacy of Alaska, refers to) all Indigenous peoples of Alaska, including not only the Iñupiat (Alaskan Inuit) and the Yupik, but also groups such as the Aleut, who share a recent ancestor, as well as the largely unrelated indigenous peoples of the Pacific Northwest Coast and the Alaskan Athabaskans, such as the Eyak people. The term Alaska Native has important legal usage in Alaska and the rest of the United States as a result of the Alaska Native Claims Settlement Act of 1971. It does not apply to Inuit or Yupik originating outside the state. As a result, the term Eskimo is still in use in Alaska. Alternative terms, such as Inuit-Yupik, have been proposed, but none has gained widespread acceptance. Early 21st century population estimates registered more than 135,000 individuals of Eskimo descent, with approximately 85,000 living in North America, 50,000 in Greenland, and the rest residing in Siberia.", "title": "Nomenclature" }, { "paragraph_id": 15, "text": "In 1977, the Inuit Circumpolar Conference (ICC) meeting in Utqiaġvik, Alaska, officially adopted Inuit as a designation for all circumpolar Native peoples, regardless of their local view on an appropriate term. They voted to replace the word Eskimo with Inuit. Even at that time, such a designation was not accepted by all. As a result, the Canadian government usage has replaced the term Eskimo with Inuit (Inuk in singular).", "title": "Nomenclature" }, { "paragraph_id": 16, "text": "The ICC charter defines Inuit as including \"the Inupiat, Yupik (Alaska), Inuit, Inuvialuit (Canada), Kalaallit (Greenland) and Yupik (Russia)\". Despite the ICC's 1977 decision to adopt the term Inuit, this has not been accepted by all or even most Yupik people.", "title": "Nomenclature" }, { "paragraph_id": 17, "text": "In 2010, the ICC passed a resolution in which they implored scientists to use Inuit and Paleo-Inuit instead of Eskimo or Paleo-Eskimo.", "title": "Nomenclature" }, { "paragraph_id": 18, "text": "In a 2015 commentary in the journal Arctic, Canadian archaeologist Max Friesen argued fellow Arctic archaeologists should follow the ICC and use Paleo-Inuit instead of Paleo-Eskimo. 
In 2016, Lisa Hodgetts and Arctic editor Patricia Wells wrote: \"In the Canadian context, continued use of any term that incorporates Eskimo is potentially harmful to the relationships between archaeologists and the Inuit and Inuvialuit communities who are our hosts and increasingly our research partners.\"", "title": "Nomenclature" }, { "paragraph_id": 19, "text": "Hodgetts and Wells suggested using more specific terms when possible (e.g., Dorset and Groswater) and agreed with Frieson in using the Inuit tradition to replace Neo-Eskimo, although they noted replacement for Palaeoeskimo was still an open question and discussed Paleo-Inuit, Arctic Small Tool Tradition, and pre-Inuit, as well as Inuktitut loanwords like Tuniit and Sivullirmiut, as possibilities.", "title": "Nomenclature" }, { "paragraph_id": 20, "text": "In 2020, Katelyn Braymer-Hayes and colleagues argued in the Journal of Anthropological Archaeology that there is a \"clear need\" to replace the terms Neo-Eskimo and Paleo-Eskimo, citing the ICC resolution, but finding a consensus within the Alaskan context particularly is difficult, since Alaska Natives do not use the word Inuit to describe themselves nor is the term legally applicable only to Iñupiat and Yupik in Alaska, and as such, terms used in Canada like Paleo Inuit and Ancestral Inuit would not be acceptable.", "title": "Nomenclature" }, { "paragraph_id": 21, "text": "American linguist Lenore Grenoble has also explicitly deferred to the ICC resolution and used Inuit–Yupik instead of Eskimo with regards to the language branch.", "title": "Nomenclature" }, { "paragraph_id": 22, "text": "Genetic evidence suggests that the Americas were populated from northeastern Asia in multiple waves. While the great majority of indigenous American peoples can be traced to a single early migration of Paleo-Indians, the Na-Dené, Inuit and Indigenous Alaskan populations exhibit admixture from distinct populations that migrated into America at a later date and are closely linked to the peoples of far northeastern Asia (e.g. Chukchi), and only more remotely to the majority indigenous American type. For modern Eskimo–Aleut speakers, this later ancestral component makes up almost half of their genomes. The ancient Paleo-Eskimo population was genetically distinct from the modern circumpolar populations, but eventually derives from the same far northeastern Asian cluster. It is understood that some or all of these ancient people migrated across the Chukchi Sea to North America during the pre-neolithic era, somewhere around 5,000 to 10,000 years ago. It is believed that ancestors of the Aleut people inhabited the Aleutian Chain 10,000 years ago.", "title": "History" }, { "paragraph_id": 23, "text": "The earliest positively identified Paleo-Eskimo cultures (Early Paleo-Eskimo) date to 5,000 years ago. Several earlier indigenous peoples existed in the northern circumpolar regions of eastern Siberia, Alaska, and Canada (although probably not in Greenland). The Paleo-Eskimo peoples appear to have developed in Alaska from people related to the Arctic small tool tradition in eastern Asia, whose ancestors had probably migrated to Alaska at least 3,000 to 5,000 years earlier.", "title": "History" }, { "paragraph_id": 24, "text": "The Yupik languages and cultures in Alaska evolved in place, beginning with the original pre-Dorset Indigenous culture developed in Alaska. At least 4,000 years ago, the Unangan culture of the Aleut became distinct. It is not generally considered an Eskimo culture. 
However, there is some possibility of an Aleutian origin of the Dorset people, who in turn are a likely ancestor of today's Inuit and Yupik.", "title": "History" }, { "paragraph_id": 25, "text": "Approximately 1,500 to 2,000 years ago, apparently in northwestern Alaska, two other distinct variations appeared. Inuit language became distinct and, over a period of several centuries, its speakers migrated across northern Alaska, through Canada, and into Greenland. The distinct culture of the Thule people (drawing strongly from the Birnirk culture) developed in northwestern Alaska. It very quickly spread over the entire area occupied by Eskimo peoples, though it was not necessarily adopted by all of them.", "title": "History" }, { "paragraph_id": 26, "text": "The Eskimo–Aleut family of languages includes two cognate branches: the Aleut (Unangan) branch and the Eskimo branch.", "title": "Languages" }, { "paragraph_id": 27, "text": "The number of cases varies, with Aleut languages having a greatly reduced case system compared to those of the Eskimo subfamily. Eskimo–Aleut languages possess voiceless plosives at the bilabial, coronal, velar and uvular positions in all languages except Aleut, which has lost the bilabial stops but retained the nasal. In the Eskimo subfamily a voiceless alveolar lateral fricative is also present.", "title": "Languages" }, { "paragraph_id": 28, "text": "The Eskimo sub-family consists of the Inuit language and Yupik language sub-groups. The Sirenikski language, which is virtually extinct, is sometimes regarded as a third branch of the Eskimo language family. Other sources regard it as a group belonging to the Yupik branch.", "title": "Languages" }, { "paragraph_id": 29, "text": "Inuit languages comprise a dialect continuum, or dialect chain, that stretches from Unalakleet and Norton Sound in Alaska, across northern Alaska and Canada, and east to Greenland. Changes from western (Iñupiaq) to eastern dialects are marked by the dropping of vestigial Yupik-related features, increasing consonant assimilation (e.g., kumlu, meaning \"thumb\", changes to kuvlu, changes to kublu, changes to kulluk, changes to kulluq,) and increased consonant lengthening, and lexical change. Thus, speakers of two adjacent Inuit dialects would usually be able to understand one another, but speakers from dialects distant from each other on the dialect continuum would have difficulty understanding one another. Seward Peninsula dialects in western Alaska, where much of the Iñupiat culture has been in place for perhaps less than 500 years, are greatly affected by phonological influence from the Yupik languages. Eastern Greenlandic, at the opposite end of Inuit range, has had significant word replacement due to a unique form of ritual name avoidance.", "title": "Languages" }, { "paragraph_id": 30, "text": "Ethnographically, Greenlandic Inuit belong to three groups: the Kalaallit of west Greenland, who speak Kalaallisut; the Tunumiit of Tunu (east Greenland), who speak Tunumiit oraasiat (\"East Greenlandic\"), and the Inughuit of north Greenland, who speak Inuktun.", "title": "Languages" }, { "paragraph_id": 31, "text": "The four Yupik languages, by contrast, including Alutiiq (Sugpiaq), Central Alaskan Yup'ik, Naukan (Naukanski), and Siberian Yupik, are distinct languages with phonological, morphological, and lexical differences. They demonstrate limited mutual intelligibility. Additionally, both Alutiiq and Central Yup'ik have considerable dialect diversity. 
The northernmost Yupik languages – Siberian Yupik and Naukan Yupik – are linguistically only slightly closer to Inuit than is Alutiiq, which is the southernmost of the Yupik languages. Although the grammatical structures of Yupik and Inuit languages are similar, they have pronounced differences phonologically. Differences of vocabulary between Inuit and any one of the Yupik languages are greater than between any two Yupik languages. Even the dialectal differences within Alutiiq and Central Alaskan Yup'ik sometimes are relatively great for locations that are relatively close geographically.", "title": "Languages" }, { "paragraph_id": 32, "text": "Despite the relatively small population of Naukan speakers, documentation of the language dates back to 1732. While Naukan is only spoken in Siberia, the language acts as an intermediate between two Alaskan languages: Siberian Yupik Eskimo and Central Yup'ik Eskimo.", "title": "Languages" }, { "paragraph_id": 33, "text": "The Sirenikski language is sometimes regarded as a third branch of the Eskimo language family, but other sources regard it as a group belonging to the Yupik branch.", "title": "Languages" }, { "paragraph_id": 34, "text": "An overview of the Eskimo–Aleut languages family is given below:", "title": "Languages" }, { "paragraph_id": 35, "text": "American linguist Lenore Grenoble has explicitly deferred to this resolution and used Inuit–Yupik instead of Eskimo with regards to the language branch.", "title": "Languages" }, { "paragraph_id": 36, "text": "There has been a long-running linguistic debate about whether or not the speakers of the Eskimo-Aleut language group have an unusually large number of words for snow. The general modern consensus is that, in multiple Eskimo languages, there are, or have been in simultaneous usage, indeed fifty plus words for snow.", "title": "Languages" }, { "paragraph_id": 37, "text": "Historically Inuit cuisine, which is taken here to include Greenlandic cuisine, Yup'ik cuisine and Aleut cuisine, consisted of a diet of animal source foods that were fished, hunted, and gathered locally.", "title": "Diet" }, { "paragraph_id": 38, "text": "Inuit inhabit the Arctic and northern Bering Sea coasts of Alaska in the United States, and Arctic coasts of the Northwest Territories, Nunavut, Quebec, and Labrador in Canada, and Greenland (associated with Denmark). Until fairly recent times, there has been a remarkable homogeneity in the culture throughout this area, which traditionally relied on fish, marine mammals, and land animals for food, heat, light, clothing, and tools. Their food sources primarily relied on seals, whales, whale blubber, walrus, and fish, all of which they hunted using harpoons on the ice. Clothing consisted of robes made of wolfskin and reindeer skin to acclimate to the low temperatures. They maintain a unique Inuit culture.", "title": "Inuit" }, { "paragraph_id": 39, "text": "Greenlandic Inuit make up 90% of Greenland's population. They belong to three major groups:", "title": "Inuit" }, { "paragraph_id": 40, "text": "Canadian Inuit live primarily in Inuit Nunangat (lit. \"lands, waters and ices of the [Inuit] people\"), their traditional homeland although some people live in southern parts of Canada. 
Inuit Nunangat ranges from the Yukon–Alaska border in the west across the Arctic to northern Labrador.", "title": "Inuit" }, { "paragraph_id": 41, "text": "The Inuvialuit live in the Inuvialuit Settlement Region, the northern part of Yukon and the Northwest Territories, which stretches to the Amundsen Gulf and the Nunavut border and includes the western Canadian Arctic Islands. The land was demarked in 1984 by the Inuvialuit Final Agreement.", "title": "Inuit" }, { "paragraph_id": 42, "text": "The majority of Inuit live in Nunavut (a territory of Canada), Nunavik (the northern part of Quebec) and in Nunatsiavut (Inuit settlement region in Labrador).", "title": "Inuit" }, { "paragraph_id": 43, "text": "The Iñupiat are Inuit of Alaska's Northwest Arctic and North Slope boroughs and the Bering Straits region, including the Seward Peninsula. Utqiaġvik, the northernmost city in the United States, is above the Arctic Circle and in the Iñupiat region. Their language is known as Iñupiaq. Their current communities include 34 villages across Iñupiat Nunaŋat (Iñupiaq lands) including seven Alaskan villages in the North Slope Borough, affiliated with the Arctic Slope Regional Corporation; eleven villages in Northwest Arctic Borough; and sixteen villages affiliated with the Bering Straits Regional Corporation.", "title": "Inuit" }, { "paragraph_id": 44, "text": "The Yupik are indigenous or aboriginal peoples who live along the coast of western Alaska, especially on the Yukon-Kuskokwim delta and along the Kuskokwim River (Central Alaskan Yup'ik); in southern Alaska (the Alutiiq); and along the eastern coast of Chukotka in the Russian Far East and St. Lawrence Island in western Alaska (the Siberian Yupik). The Yupik economy has traditionally been strongly dominated by the harvest of marine mammals, especially seals, walrus, and whales.", "title": "Yupik" }, { "paragraph_id": 45, "text": "The Alutiiq people (pronounced /əˈluːtɪk/ ə-LOO-tik in English; from Promyshlenniki Russian Алеутъ, \"Aleut\"; plural often \"Alutiit\"), also called by their ancestral name Sugpiaq (/ˈsʊɡˌbjɑːk/ SUUG-byahk or /ˈsʊɡpiˌæk/ SUUG-pee-AK; plural often \"Sugpiat\"), as well as Pacific Eskimo or Pacific Yupik, are one of eight groups of Alaska Natives that inhabit the southern-central coast of the region.", "title": "Yupik" }, { "paragraph_id": 46, "text": "The Alutiiq language is relatively close to that spoken by the Yupik in the Bethel, Alaska area. But, it is considered a distinct language with two major dialects: the Koniag dialect, spoken on the Alaska Peninsula and on Kodiak Island, and the Chugach dialect, spoken on the southern Kenai Peninsula and in Prince William Sound. Residents of Nanwalek, located on southern part of the Kenai Peninsula near Seldovia, speak what they call Sugpiaq. They are able to understand those who speak Yupik in Bethel. With a population of approximately 3,000, and the number of speakers in the hundreds, Alutiiq communities are working to revitalize their language.", "title": "Yupik" }, { "paragraph_id": 47, "text": "Yup'ik, with an apostrophe, denotes the speakers of the Central Alaskan Yup'ik language, who live in western Alaska and southwestern Alaska from southern Norton Sound to the north side of Bristol Bay, on the Yukon–Kuskokwim Delta, and on Nelson Island. The use of the apostrophe in the name Yup'ik is a written convention to denote the long pronunciation of the p sound; but it is spoken the same in other Yupik languages. 
Of all the Alaska Native languages, Central Alaskan Yup'ik has the most speakers, with about 10,000 of a total Yup'ik population of 21,000 still speaking the language. The five dialects of Central Alaskan Yup'ik include General Central Yup'ik, and the Egegik, Norton Sound, Hooper Bay-Chevak, and Nunivak dialects. In the latter two dialects, both the language and the people are called Cup'ik.", "title": "Yupik" }, { "paragraph_id": 48, "text": "Siberian Yupik reside along the Bering Sea coast of the Chukchi Peninsula in Siberia in the Russian Far East and in the villages of Gambell and Savoonga on St. Lawrence Island in Alaska. The Central Siberian Yupik spoken on the Chukchi Peninsula and on St. Lawrence Island is nearly identical. About 1,050 of a total Alaska population of 1,100 Siberian Yupik people speak the language. It is the first language of the home for most St. Lawrence Island children. In Siberia, about 300 of a total of 900 Siberian Yupik people still learn and study the language, though it is no longer learned as a first language by children.", "title": "Yupik" }, { "paragraph_id": 49, "text": "About 70 of 400 Naukan people still speak Naukanski. The Naukan originate on the Chukot Peninsula in Chukotka Autonomous Okrug in Siberia. Despite the relatively small population of Naukan speakers, documentation of the language dates back to 1732. While Naukan is only spoken in Siberia, the language acts as an intermediate between two Alaskan languages: Siberian Yupik Eskimo and Central Yup'ik Eskimo.", "title": "Yupik" }, { "paragraph_id": 50, "text": "Some speakers of Siberian Yupik languages used to speak an Eskimo variant in the past, before they underwent a language shift. These former speakers of the Sirenik Eskimo language inhabited the settlements of Sireniki, Imtuk, and some small villages stretching to the west from Sireniki along the south-eastern coasts of the Chukchi Peninsula. They lived in neighborhoods with Siberian Yupik and Chukchi peoples.", "title": "Sirenik Eskimos" }, { "paragraph_id": 51, "text": "As early as 1895, Imtuk was a settlement with a mixed population of Sirenik Eskimos and Ungazigmit (the latter belonging to Siberian Yupik). Sirenik Eskimo culture has been influenced by that of Chukchi, and the language shows Chukchi language influences. Folktale motifs also show the influence of Chukchi culture.", "title": "Sirenik Eskimos" }, { "paragraph_id": 52, "text": "The above peculiarities of this (already extinct) Eskimo language amounted to mutual unintelligibility even with its nearest language relatives: in the past, Sirenik Eskimos had to use the unrelated Chukchi language as a lingua franca for communicating with Siberian Yupik.", "title": "Sirenik Eskimos" }, { "paragraph_id": 53, "text": "Many words are formed from roots entirely different from those in Siberian Yupik, but even the grammar has several peculiarities distinct not only among Eskimo languages, but even compared to Aleut. For example, dual number is not known in Sirenik Eskimo, while most Eskimo–Aleut languages have a dual, including its neighboring Siberian Yupik relatives.", "title": "Sirenik Eskimos" }, { "paragraph_id": 54, "text": "Little is known about the origin of this diversity. The peculiarities of this language may be the result of a supposed long isolation from other Eskimo groups, and being in contact only with speakers of unrelated languages for many centuries. 
The influence of the Chukchi language is clear.", "title": "Sirenik Eskimos" }, { "paragraph_id": 55, "text": "Because of all these factors, the classification of the Sireniki Eskimo language is not yet settled: the Sireniki language is sometimes regarded as a third branch of Eskimo (at least, its possibility is mentioned). Sometimes it is regarded rather as a group belonging to the Yupik branch.", "title": "Sirenik Eskimos" } ]
Eskimo is an exonym used to refer to two closely related Indigenous peoples: Inuit and the Yupik of eastern Siberia and Alaska. A related third group, the Aleut, which inhabit the Aleutian Islands, are generally excluded from the definition of Eskimo. The three groups share a relatively recent common ancestor, and speak related languages belonging to the Eskaleut language family. These circumpolar peoples have traditionally inhabited the Arctic and subarctic regions from eastern Siberia (Russia) to Alaska, Northern Canada, Nunavik, Nunatsiavut, and Greenland. Many Inuit, Yupik, Aleut, and other individuals consider the term Eskimo, which is of a disputed etymology, to be offensive and even pejorative. Eskimo continues to be used within a historical, linguistic, archaeological, and cultural context. The governments in Canada and the United States have made moves to cease using the term Eskimo in official documents, but it has not been eliminated, as the word is in some places written into tribal, and therefore national, legal terminology. Canada officially uses the term Inuit to describe the indigenous Canadian people who are living in the country's northern sectors and are not First Nations or Métis. The United States government legally uses Alaska Native for Native Alaskans including the Yupik, Inuit, and Aleut, but also for non-Eskimo Native Alaskans including the Tlingit, the Haida, the Eyak, and the Tsimshian, in addition to at least nine separate northern Athabaskan/Dene peoples. The designation Alaska Native applies to enrolled tribal members only, in contrast to individual Eskimo/Aleut persons claiming descent from the world's "most widespread aboriginal group". There are between 171,000 and 187,000 Inuit and Yupik, the majority of whom live in or near their traditional circumpolar homeland. Of these, 53,785 (2010) live in the United States, 65,025 (2016) in Canada, 51,730 (2021) in Greenland and 1657 (2021) in Russia. In addition, 16,730 people living in Denmark were born in Greenland. The non-governmental organization (NGO) known as the Inuit Circumpolar Council claims to represent 180,000 people. The non-Inuit sub-branch of the Eskimo branch of the Eskaleut language family consists of four distinct Yupik languages. Two of them are used in the Russian Far East as well as on St. Lawrence Island, and two of them are used in western Alaska, southwestern Alaska, and the western part of Southcentral Alaska. The extinct language of the Sirenik people is sometimes claimed to be related to these other languages.
2001-05-23T07:28:00Z
2023-12-30T08:33:57Z
[ "Template:Main", "Template:Webarchive", "Template:Wiktionary", "Template:Commons category", "Template:Short description", "Template:Other uses", "Template:Citation needed", "Template:Hsp", "Template:Authority control", "Template:Cite book", "Template:Cite news", "Template:Cite journal", "Template:Cite dictionary", "Template:IPAc-en", "Template:Further", "Template:Div col", "Template:Reflist", "Template:Ethnic slurs", "Template:Externalvideo", "Template:Infobox ethnic group", "Template:Div col end", "Template:Cite encyclopedia", "Template:Dead link", "Template:Excerpt", "Template:Distinguish", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Eskimo
9,496
Epiphenomenalism
Epiphenomenalism is a position on the mind–body problem which holds that physical and biochemical events within the human body (sense organs, neural impulses, and muscle contractions, for example) are the sole cause of mental events (thought, consciousness, and cognition). According to this view, subjective mental events are completely dependent for their existence on corresponding physical and biochemical events within the human body, yet themselves have no influence over physical events. The appearance that subjective mental states (such as intentions) influence physical events is merely an illusion. For instance, fear seems to make the heart beat faster, but according to epiphenomenalism the biochemical secretions of the brain and nervous system (such as adrenaline)—not the experience of fear—is what raises the heartbeat. Because mental events are a kind of overflow that cannot cause anything physical, yet have non-physical properties, epiphenomenalism is viewed as a form of property dualism. During the seventeenth century, René Descartes argued that animals are subject to mechanical laws of nature. He defended the idea of automatic behavior, or the performance of actions without conscious thought. Descartes questioned how the immaterial mind and the material body can interact causally. His interactionist model (1649) held that the body relates to the mind through the pineal gland. La Mettrie, Leibniz, and Spinoza all in their own way began this way of thinking. The idea that even if the animal were conscious nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874). Thomas Henry Huxley agreed with Descartes that behavior is determined solely by physical mechanisms, but he also believed that humans enjoy an intelligent life. In 1874, Huxley argued, in the Presidential Address to the British Association for the Advancement of Science, that animals are conscious automata. Huxley proposed that psychical changes are collateral products of physical changes. Like the bell of a clock that has no role in keeping the time, consciousness has no role in determining behavior. Huxley defended automatism by testing reflex actions, originally supported by Descartes. Huxley hypothesized that frogs that undergo lobotomy would swim when thrown into water, despite being unable to initiate actions. He argued that the ability to swim was solely dependent on the molecular change in the brain, concluding that consciousness is not necessary for reflex actions. According to epiphenomenalism, animals experience pain only as a result of neurophysiology. In 1870, Huxley conducted a case study on a French soldier who had sustained a shot in the Franco-Prussian War that fractured his left parietal bone. Every few weeks the soldier would enter a trance-like state, smoking, dressing himself, and aiming his cane like a rifle all while being insensitive to pins, electric shocks, odorous substances, vinegar, noise, and certain light conditions. Huxley used this study to show that consciousness was not necessary to execute these purposeful actions, justifying the assumption that humans are insensible machines. Huxley's mechanistic attitude towards the body convinced him that the brain alone causes behavior. In the early 1900s scientific behaviorists such as Ivan Pavlov, John B. Watson, and B. F. 
Skinner began the attempt to uncover laws describing the relationship between stimuli and responses, without reference to inner mental phenomena. Instead of adopting a form of eliminativism or mental fictionalism, positions that deny that inner mental phenomena exist, a behaviorist was able to adopt epiphenomenalism in order to allow for the existence of mind. George Santayana (1905) believed that all motion has merely physical causes. Because consciousness is accessory to life and not essential to it, natural selection is responsible for ingraining tendencies to avoid certain contingencies without any conscious achievement involved. By the 1960s, scientific behaviourism met substantial difficulties and eventually gave way to the cognitive revolution. Participants in that revolution, such as Jerry Fodor, reject epiphenomenalism and insist upon the efficacy of the mind. Fodor even speaks of "epiphobia"—fear that one is becoming an epiphenomenalist. However, since the cognitive revolution, there have been several who have argued for a version of epiphenomenalism. In 1970, Keith Campbell proposed his "new epiphenomenalism", which states that the body produces a spiritual mind that does not act on the body. How the brain causes a spiritual mind, according to Campbell, is destined to remain beyond our understanding forever (see New mysterianism). In 2001, David Chalmers and Frank Jackson argued that claims about conscious states should be deduced a priori from claims about physical states alone. They offered that epiphenomenalism bridges, but does not close, the explanatory gap between the physical and the phenomenal realms. These more recent versions maintain that only the subjective, qualitative aspects of mental states are epiphenomenal. Imagine both Pierre and a robot eating a cupcake. Unlike the robot, Pierre is conscious of eating the cupcake while the behavior is under way. This subjective experience is often called a quale (plural qualia), and it describes the private "raw feel" or the subjective "what-it-is-like" that is the inner accompaniment of many mental states. Thus, while Pierre and the robot are both doing the same thing, only Pierre has the inner conscious experience. Frank Jackson (1982), for example, once espoused the following view: I am what is sometimes known as a "qualia freak". I think that there are certain features of bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes. Tell me everything physical there is to tell about what is going on in a living brain... you won't have told me about the hurtfulness of pains, the itchiness of itches, pangs of jealousy.... According to epiphenomenalism, mental states like Pierre's pleasurable experience—or, at any rate, their distinctive qualia—are epiphenomena; they are side-effects or by-products of physical processes in the body. If Pierre takes a second bite, it is not caused by his pleasure from the first; If Pierre says, "That was good, so I will take another bite", his speech act is not caused by the preceding pleasure. The conscious experiences that accompany brain processes are causally impotent. The mind might simply be a byproduct of other properties such as brain size or pathway activation synchronicity, which are adaptive. Some thinkers draw distinctions between different varieties of epiphenomenalism. 
In Consciousness Explained, Daniel Dennett distinguishes between a purely metaphysical sense of epiphenomenalism, in which the epiphenomenon has no causal impact at all, and Huxley's "steam whistle" epiphenomenalism, in which effects exist but are not functionally relevant. A large body of neurophysiological data seems to support epiphenomenalism . Some of the oldest such data is the Bereitschaftspotential or "readiness potential" in which electrical activity related to voluntary actions can be recorded up to two seconds before the subject is aware of making a decision to perform the action. More recently Benjamin Libet et al. (1979) have shown that it can take 0.5 seconds before a stimulus becomes part of conscious experience even though subjects can respond to the stimulus in reaction time tests within 200 milliseconds. The methods and conclusions of this experiment have received much criticism (e.g., see the many critical commentaries in Libet's (1985) target article), including fairly recently by neuroscientists such as Peter Tse, who claim to show that the readiness potential has nothing to do with consciousness at all. Recent research on the Event Related Potential also shows that conscious experience does not occur until the late phase of the potential (P3 or later) that occurs 300 milliseconds or more after the event. In Bregman's auditory continuity illusion, where a pure tone is followed by broadband noise and the noise is followed by the same pure tone it seems as if the tone occurs throughout the period of noise. This also suggests a delay for processing data before conscious experience occurs. Popular science author Tor Nørretranders has called the delay the "user illusion", implying that we only have the illusion of conscious control, most actions being controlled automatically by non-conscious parts of the brain with the conscious mind relegated to the role of spectator. The scientific data seem to support the idea that conscious experience is created by non-conscious processes in the brain (i.e., there is subliminal processing that becomes conscious experience). These results have been interpreted to suggest that people are capable of action before conscious experience of the decision to act occurs. Some argue that this supports epiphenomenalism, since it shows that the feeling of making a decision to act is actually an epiphenomenon; the action happens before the decision, so the decision did not cause the action to occur. The most powerful argument against epiphenomenalism is that it is self-contradictory: if we have knowledge about epiphenomenalism, then our brains know about the existence of the mind, but if epiphenomenalism were correct, then our brains should not have any knowledge about the mind, because the mind does not affect anything physical. However, some philosophers do not accept this as a rigorous refutation. For example, Victor Argonov states that epiphenomenalism is a questionable, but experimentally falsifiable theory. He argues that the personal mind is not the only source of knowledge about the existence of mind in the world. A creature (even a philosophical zombie) could have knowledge about mind and the mind-body problem by virtue of some innate knowledge. The information about mind (and its problematic properties such as qualia and the hard problem of consciousness) could have been, in principle, implicitly "written" in the material world since its creation. 
Epiphenomenalists can say that God created an immaterial mind and a detailed "program" of material human behavior that makes it possible to speak about the mind–body problem. That version of epiphenomenalism seems highly exotic, but it cannot be excluded from consideration by pure theory. However, Argonov suggests that experiments could refute epiphenomenalism. In particular, epiphenomenalism could be refuted if neural correlates of consciousness can be found in the human brain, and it is proven that human speech about consciousness is caused by them. Some philosophers, such as Dennett, reject both epiphenomenalism and the existence of qualia with the same charge that Gilbert Ryle leveled against a Cartesian "ghost in the machine", that they too are category mistakes. A quale or conscious experience would not belong to the category of objects of reference on this account, but rather to the category of ways of doing things. Functionalists assert that mental states are well described by their overall role, their activity in relation to the organism as a whole. "This doctrine is rooted in Aristotle's conception of the soul, and has antecedents in Hobbes's conception of the mind as a 'calculating machine', but it has become fully articulated (and popularly endorsed) only in the last third of the 20th century." In so far as it mediates stimulus and response, a mental function is analogous to a program that processes input/output in automata theory. In principle, multiple realisability would guarantee platform dependencies can be avoided, whether in terms of hardware and operating system or, ex hypothesi, biology and philosophy. Because a high-level language is a practical requirement for developing the most complex programs, functionalism implies that a non-reductive physicalism would offer a similar advantage over a strictly eliminative materialism. Eliminative materialists believe "folk psychology" is so unscientific that, ultimately, it will be better to eliminate primitive concepts such as mind, desire and belief, in favor of a future neuro-scientific account. A more moderate position such as J. L. Mackie's error theory suggests that false beliefs should be stripped away from a mental concept without eliminating the concept itself, the legitimate core meaning being left intact. Benjamin Libet's results are quoted in favor of epiphenomenalism, but he believes subjects still have a "conscious veto", since the readiness potential does not invariably lead to an action. In Freedom Evolves, Daniel Dennett argues that a no-free-will conclusion is based on dubious assumptions about the location of consciousness, as well as questioning the accuracy and interpretation of Libet's results. Similar criticism of Libet-style research has been made by neuroscientist Adina Roskies and cognitive theorists Tim Bayne and Alfred Mele. Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the point in time at which a conscious experience and a conscious decision occurs, thus relying on the subject to be able to consciously perform an action. That ability would seem to be at odds with early epiphenomenalism, which according to Huxley is the broad claim that consciousness is "completely without any power… as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". Mind–body dualists reject epiphenomenalism on the same grounds. Adrian G. 
Guggisberg and Annaïs Mottaz have also challenged those findings. A study by Aaron Schurger and colleagues published in PNAS challenged assumptions about the causal nature of the readiness potential itself (and the "pre-movement buildup" of neural activity in general), thus denying the conclusions drawn from studies such as Libet's and Fried's. In favor of interactionism, Celia Green (2003) argues that epiphenomenalism does not even provide a satisfactory solution to the problem of interaction posed by substance dualism. Although it does not entail substance dualism, according to Green, epiphenomenalism implies a one-way form of interactionism that is just as hard to conceive of as the two-way form embodied in substance dualism. Green suggests the assumption that it is less of a problem may arise from the unexamined belief that physical events have some sort of primacy over mental ones. A number of scientists and philosophers, including William James, Karl Popper, John C. Eccles and Donald Symons, dismiss epiphenomenalism from an evolutionary perspective. They point out that the view that mind is an epiphenomenon of brain activity is not consistent with evolutionary theory, because if mind were functionless, it would have disappeared long ago, as it would not have been favored by evolution.
[ { "paragraph_id": 0, "text": "Epiphenomenalism is a position on the mind–body problem which holds that physical and biochemical events within the human body (sense organs, neural impulses, and muscle contractions, for example) are the sole cause of mental events (thought, consciousness, and cognition). According to this view, subjective mental events are completely dependent for their existence on corresponding physical and biochemical events within the human body, yet themselves have no influence over physical events. The appearance that subjective mental states (such as intentions) influence physical events is merely an illusion. For instance, fear seems to make the heart beat faster, but according to epiphenomenalism the biochemical secretions of the brain and nervous system (such as adrenaline)—not the experience of fear—is what raises the heartbeat. Because mental events are a kind of overflow that cannot cause anything physical, yet have non-physical properties, epiphenomenalism is viewed as a form of property dualism.", "title": "" }, { "paragraph_id": 1, "text": "During the seventeenth century, René Descartes argued that animals are subject to mechanical laws of nature. He defended the idea of automatic behavior, or the performance of actions without conscious thought. Descartes questioned how the immaterial mind and the material body can interact causally. His interactionist model (1649) held that the body relates to the mind through the pineal gland. La Mettrie, Leibniz, and Spinoza all in their own way began this way of thinking. The idea that even if the animal were conscious nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874).", "title": "Development" }, { "paragraph_id": 2, "text": "Thomas Henry Huxley agreed with Descartes that behavior is determined solely by physical mechanisms, but he also believed that humans enjoy an intelligent life. In 1874, Huxley argued, in the Presidential Address to the British Association for the Advancement of Science, that animals are conscious automata. Huxley proposed that psychical changes are collateral products of physical changes. Like the bell of a clock that has no role in keeping the time, consciousness has no role in determining behavior.", "title": "Development" }, { "paragraph_id": 3, "text": "Huxley defended automatism by testing reflex actions, originally supported by Descartes. Huxley hypothesized that frogs that undergo lobotomy would swim when thrown into water, despite being unable to initiate actions. He argued that the ability to swim was solely dependent on the molecular change in the brain, concluding that consciousness is not necessary for reflex actions. According to epiphenomenalism, animals experience pain only as a result of neurophysiology.", "title": "Development" }, { "paragraph_id": 4, "text": "In 1870, Huxley conducted a case study on a French soldier who had sustained a shot in the Franco-Prussian War that fractured his left parietal bone. Every few weeks the soldier would enter a trance-like state, smoking, dressing himself, and aiming his cane like a rifle all while being insensitive to pins, electric shocks, odorous substances, vinegar, noise, and certain light conditions. Huxley used this study to show that consciousness was not necessary to execute these purposeful actions, justifying the assumption that humans are insensible machines. 
Huxley's mechanistic attitude towards the body convinced him that the brain alone causes behavior.", "title": "Development" }, { "paragraph_id": 5, "text": "In the early 1900s scientific behaviorists such as Ivan Pavlov, John B. Watson, and B. F. Skinner began the attempt to uncover laws describing the relationship between stimuli and responses, without reference to inner mental phenomena. Instead of adopting a form of eliminativism or mental fictionalism, positions that deny that inner mental phenomena exist, a behaviorist was able to adopt epiphenomenalism in order to allow for the existence of mind. George Santayana (1905) believed that all motion has merely physical causes. Because consciousness is accessory to life and not essential to it, natural selection is responsible for ingraining tendencies to avoid certain contingencies without any conscious achievement involved. By the 1960s, scientific behaviourism met substantial difficulties and eventually gave way to the cognitive revolution. Participants in that revolution, such as Jerry Fodor, reject epiphenomenalism and insist upon the efficacy of the mind. Fodor even speaks of \"epiphobia\"—fear that one is becoming an epiphenomenalist.", "title": "Development" }, { "paragraph_id": 6, "text": "However, since the cognitive revolution, there have been several who have argued for a version of epiphenomenalism. In 1970, Keith Campbell proposed his \"new epiphenomenalism\", which states that the body produces a spiritual mind that does not act on the body. How the brain causes a spiritual mind, according to Campbell, is destined to remain beyond our understanding forever (see New mysterianism). In 2001, David Chalmers and Frank Jackson argued that claims about conscious states should be deduced a priori from claims about physical states alone. They offered that epiphenomenalism bridges, but does not close, the explanatory gap between the physical and the phenomenal realms. These more recent versions maintain that only the subjective, qualitative aspects of mental states are epiphenomenal. Imagine both Pierre and a robot eating a cupcake. Unlike the robot, Pierre is conscious of eating the cupcake while the behavior is under way. This subjective experience is often called a quale (plural qualia), and it describes the private \"raw feel\" or the subjective \"what-it-is-like\" that is the inner accompaniment of many mental states. Thus, while Pierre and the robot are both doing the same thing, only Pierre has the inner conscious experience.", "title": "Development" }, { "paragraph_id": 7, "text": "Frank Jackson (1982), for example, once espoused the following view:", "title": "Development" }, { "paragraph_id": 8, "text": "I am what is sometimes known as a \"qualia freak\". I think that there are certain features of bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes. Tell me everything physical there is to tell about what is going on in a living brain... you won't have told me about the hurtfulness of pains, the itchiness of itches, pangs of jealousy....", "title": "Development" }, { "paragraph_id": 9, "text": "According to epiphenomenalism, mental states like Pierre's pleasurable experience—or, at any rate, their distinctive qualia—are epiphenomena; they are side-effects or by-products of physical processes in the body. 
If Pierre takes a second bite, it is not caused by his pleasure from the first; If Pierre says, \"That was good, so I will take another bite\", his speech act is not caused by the preceding pleasure. The conscious experiences that accompany brain processes are causally impotent. The mind might simply be a byproduct of other properties such as brain size or pathway activation synchronicity, which are adaptive.", "title": "Development" }, { "paragraph_id": 10, "text": "Some thinkers draw distinctions between different varieties of epiphenomenalism. In Consciousness Explained, Daniel Dennett distinguishes between a purely metaphysical sense of epiphenomenalism, in which the epiphenomenon has no causal impact at all, and Huxley's \"steam whistle\" epiphenomenalism, in which effects exist but are not functionally relevant.", "title": "Development" }, { "paragraph_id": 11, "text": "A large body of neurophysiological data seems to support epiphenomenalism . Some of the oldest such data is the Bereitschaftspotential or \"readiness potential\" in which electrical activity related to voluntary actions can be recorded up to two seconds before the subject is aware of making a decision to perform the action. More recently Benjamin Libet et al. (1979) have shown that it can take 0.5 seconds before a stimulus becomes part of conscious experience even though subjects can respond to the stimulus in reaction time tests within 200 milliseconds. The methods and conclusions of this experiment have received much criticism (e.g., see the many critical commentaries in Libet's (1985) target article), including fairly recently by neuroscientists such as Peter Tse, who claim to show that the readiness potential has nothing to do with consciousness at all. Recent research on the Event Related Potential also shows that conscious experience does not occur until the late phase of the potential (P3 or later) that occurs 300 milliseconds or more after the event. In Bregman's auditory continuity illusion, where a pure tone is followed by broadband noise and the noise is followed by the same pure tone it seems as if the tone occurs throughout the period of noise. This also suggests a delay for processing data before conscious experience occurs. Popular science author Tor Nørretranders has called the delay the \"user illusion\", implying that we only have the illusion of conscious control, most actions being controlled automatically by non-conscious parts of the brain with the conscious mind relegated to the role of spectator.", "title": "Arguments for" }, { "paragraph_id": 12, "text": "The scientific data seem to support the idea that conscious experience is created by non-conscious processes in the brain (i.e., there is subliminal processing that becomes conscious experience). These results have been interpreted to suggest that people are capable of action before conscious experience of the decision to act occurs. 
Some argue that this supports epiphenomenalism, since it shows that the feeling of making a decision to act is actually an epiphenomenon; the action happens before the decision, so the decision did not cause the action to occur.", "title": "Arguments for" }, { "paragraph_id": 13, "text": "The most powerful argument against epiphenomenalism is that it is self-contradictory: if we have knowledge about epiphenomenalism, then our brains know about the existence of the mind, but if epiphenomenalism were correct, then our brains should not have any knowledge about the mind, because the mind does not affect anything physical.", "title": "Arguments against" }, { "paragraph_id": 14, "text": "However, some philosophers do not accept this as a rigorous refutation. For example, Victor Argonov states that epiphenomenalism is a questionable, but experimentally falsifiable theory. He argues that the personal mind is not the only source of knowledge about the existence of mind in the world. A creature (even a philosophical zombie) could have knowledge about mind and the mind-body problem by virtue of some innate knowledge. The information about mind (and its problematic properties such as qualia and the hard problem of consciousness) could have been, in principle, implicitly \"written\" in the material world since its creation. Epiphenomenalists can say that God created an immaterial mind and a detailed \"program\" of material human behavior that makes it possible to speak about the mind–body problem. That version of epiphenomenalism seems highly exotic, but it cannot be excluded from consideration by pure theory. However, Argonov suggests that experiments could refute epiphenomenalism. In particular, epiphenomenalism could be refuted if neural correlates of consciousness can be found in the human brain, and it is proven that human speech about consciousness is caused by them.", "title": "Arguments against" }, { "paragraph_id": 15, "text": "Some philosophers, such as Dennett, reject both epiphenomenalism and the existence of qualia with the same charge that Gilbert Ryle leveled against a Cartesian \"ghost in the machine\", that they too are category mistakes. A quale or conscious experience would not belong to the category of objects of reference on this account, but rather to the category of ways of doing things.", "title": "Arguments against" }, { "paragraph_id": 16, "text": "Functionalists assert that mental states are well described by their overall role, their activity in relation to the organism as a whole. \"This doctrine is rooted in Aristotle's conception of the soul, and has antecedents in Hobbes's conception of the mind as a 'calculating machine', but it has become fully articulated (and popularly endorsed) only in the last third of the 20th century.\" In so far as it mediates stimulus and response, a mental function is analogous to a program that processes input/output in automata theory. In principle, multiple realisability would guarantee platform dependencies can be avoided, whether in terms of hardware and operating system or, ex hypothesi, biology and philosophy. 
Because a high-level language is a practical requirement for developing the most complex programs, functionalism implies that a non-reductive physicalism would offer a similar advantage over a strictly eliminative materialism.", "title": "Arguments against" }, { "paragraph_id": 17, "text": "Eliminative materialists believe \"folk psychology\" is so unscientific that, ultimately, it will be better to eliminate primitive concepts such as mind, desire and belief, in favor of a future neuro-scientific account. A more moderate position such as J. L. Mackie's error theory suggests that false beliefs should be stripped away from a mental concept without eliminating the concept itself, the legitimate core meaning being left intact.", "title": "Arguments against" }, { "paragraph_id": 18, "text": "Benjamin Libet's results are quoted in favor of epiphenomenalism, but he believes subjects still have a \"conscious veto\", since the readiness potential does not invariably lead to an action. In Freedom Evolves, Daniel Dennett argues that a no-free-will conclusion is based on dubious assumptions about the location of consciousness, as well as questioning the accuracy and interpretation of Libet's results. Similar criticism of Libet-style research has been made by neuroscientist Adina Roskies and cognitive theorists Tim Bayne and Alfred Mele.", "title": "Arguments against" }, { "paragraph_id": 19, "text": "Others have argued that data such as the Bereitschaftspotential undermine epiphenomenalism for the same reason, that such experiments rely on a subject reporting the point in time at which a conscious experience and a conscious decision occurs, thus relying on the subject to be able to consciously perform an action. That ability would seem to be at odds with early epiphenomenalism, which according to Huxley is the broad claim that consciousness is \"completely without any power… as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery\". Mind–body dualists reject epiphenomenalism on the same grounds.", "title": "Arguments against" }, { "paragraph_id": 20, "text": "Adrian G. Guggisberg and Annaïs Mottaz have also challenged those findings.", "title": "Arguments against" }, { "paragraph_id": 21, "text": "A study by Aaron Schurger and colleagues published in PNAS challenged assumptions about the causal nature of the readiness potential itself (and the \"pre-movement buildup\" of neural activity in general), thus denying the conclusions drawn from studies such as Libet's and Fried's.", "title": "Arguments against" }, { "paragraph_id": 22, "text": "In favor of interactionism, Celia Green (2003) argues that epiphenomenalism does not even provide a satisfactory solution to the problem of interaction posed by substance dualism. Although it does not entail substance dualism, according to Green, epiphenomenalism implies a one-way form of interactionism that is just as hard to conceive of as the two-way form embodied in substance dualism. Green suggests the assumption that it is less of a problem may arise from the unexamined belief that physical events have some sort of primacy over mental ones.", "title": "Arguments against" }, { "paragraph_id": 23, "text": "A number of scientists and philosophers, including William James, Karl Popper, John C. Eccles and Donald Symons, dismiss epiphenomenalism from an evolutionary perspective. 
They point out that the view that mind is an epiphenomenon of brain activity is not consistent with evolutionary theory, because if mind were functionless, it would have disappeared long ago, as it would not have been favoured by evolution.", "title": "Arguments against" } ]
Epiphenomenalism is a position on the mind–body problem which holds that physical and biochemical events within the human body are the sole cause of mental events. According to this view, subjective mental events are completely dependent for their existence on corresponding physical and biochemical events within the human body, yet themselves have no influence over physical events. The appearance that subjective mental states influence physical events is merely an illusion. For instance, fear seems to make the heart beat faster, but according to epiphenomenalism the biochemical secretions of the brain and nervous system—not the experience of fear—are what raise the heartbeat. Because mental events are a kind of overflow that cannot cause anything physical, yet have non-physical properties, epiphenomenalism is viewed as a form of property dualism.
2001-02-11T17:51:48Z
2023-12-30T19:51:59Z
[ "Template:Philosophy of mind", "Template:Blockquote", "Template:Cols", "Template:Wiktionary", "Template:Philosophy topics", "Template:Short description", "Template:Colend", "Template:Reflist", "Template:Cite encyclopedia", "Template:Cite book", "Template:Wikibooks", "Template:Citation needed", "Template:Cite journal", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Epiphenomenalism
9,498
Esperantujo
Esperantujo (IPA: [esperanˈtujo]) or Esperantio (IPA: [esperanˈtio]) is the community of speakers of the Esperanto language and their culture, as well as the places and institutions where the language is used. The term is used "as if it were a country." Although it does not occupy its own area of Earth's surface, it can be said to constitute the 120 countries which have their own national Esperanto association. The word is formed analogously to country names. In Esperanto, the names of countries were traditionally formed from the ethnic name of their inhabitants plus the suffix -ujo. For example, "France" was Francujo, from franco (a Frenchman). The term analogous to Francujo would be Esperantistujo (Esperantist-nation). However, that would convey the idea of the physical body of people, whereas using the name of the language as the basis of the word gives it the more abstract connotation of a cultural sphere. Currently, names of nation states are often formed with the suffix -io (traditionally reserved for deriving country names from geographic features — e.g. Francio instead of Francujo), and recently the form Esperantio has been used, among others, in the Pasporta Servo and the Esperanto Citizens' Community. In 1908, Dr. Wilhelm Molly attempted to create an Esperanto state in the Prussian-Belgian condominium of Neutral Moresnet, known as "Amikejo" (place of friendship). What became of it is unclear, and Neutral Moresnet was annexed to Belgium in the Treaty of Versailles, 1919. During the 1960s a new effort was made to create an Esperanto state, this time called the Republic of Rose Island. The island state stood in the Adriatic Sea near Italy. In Europe on 2 June 2001 a number of organizations (they prefer to call themselves establishments) founded the Esperanta Civito, which "aims to be a subject of international law" and "aims to consolidate the relations between the Esperantists who feel themselves belonging to the diaspora language group which does not belong to any country". Esperanto Civito always uses the name Esperantujo (introduced by Hector Hodler in 1908), which itself is defined according to their interpretation of raumism, and the meaning, therefore, may differ from the traditional Esperanto understanding of the word Esperantujo. A language-learning partner application called Amikumu was launched in 2017, allowing Esperanto speakers to find each other. Esperantujo includes any physical place where Esperanto speakers meet, such as Esperanto gatherings or virtual networks. Sometimes it is said to exist wherever Esperanto speakers are connected. Although Esperantujo does not have its own official territory, a number of places around the world are owned by Esperanto organizations or are otherwise permanently connected to the Esperanto language and its community: Judging by the members of the World Esperanto Association, the countries with the most Esperanto speakers are (in descending order): Brazil, Germany, Japan, France, the United States, China, Italy. There is no governmental system in Esperantujo because it is not a true state. However, there is a social hierarchy of associations: There are also thematic associations worldwide, concerned with spirituality, hobbies, science, or bringing together Esperantists who share common interests. A number of global organizations also exist, such as Sennacieca Asocio Tutmonda (SAT) and the World Esperanto Youth Organization (TEJO), which has 46 national sections.
The Universal Esperanto Association is not a governmental system; however, the association represents Esperanto worldwide. In addition to the United Nations and UNESCO, the UEA has consultative relationships with UNICEF and the Council of Europe and general cooperative relations with the Organization of American States. UEA officially collaborates with the International Organization for Standardization (ISO) by means of an active connection to the ISO Committee on terminology (ISO/TC 37). The association is also active in providing information on the European Union and other interstate and international organizations and conferences. UEA is a member of the European Language Council, a joint forum of universities and linguistic associations that promotes the knowledge of languages and cultures within and outside the European Union. Moreover, on 10 May 2011, the UEA and the International Information Center for Terminology (Infoterm) signed an Agreement on Cooperation; its objectives are to exchange information, to support each other, and to cooperate on projects, meetings and publications in the field of terminology, and under it the UEA became an Associate Member of Infoterm. In 2003 a European political movement called Europe–Democracy–Esperanto was created. Within it is a European federation that brings together local associations whose statutes depend on their respective countries. The working language of the movement is Esperanto. The goal is "to provide the European Union with the necessary tools to set up member rights democracy". The international language is a tool to enable cross-border political and social dialogue and actively contribute to peace and understanding between peoples. The original idea in the first ballot was mainly to make the existence and the use of Esperanto known to the general public. However, in France its votes have grown steadily: 25,067 (2004), 28,944 (2009) and 33,115 (2014). In this country a number of movements support the cause: France Équité, Europe-Liberté, and Politicat. The flag of Esperanto is called Verda Flago (Green Flag). It consists of: The anthem has been "La Espero" since 1891: it is a poem written by L. L. Zamenhof. The song is usually sung to the triumphal march composed by Félicien Menu de Ménil in 1909. The Jubilee symbol represents the language internally, while the flag represents the Esperanto movement. It contains the Latin letter E (Esperanto) and the Cyrillic letter Э (Эсперанто) symbolizing the unification of West and East. The Jubilee symbol has been controversial, with some Esperantists derisively calling it "the melon." In addition, Ludwik Lejzer Zamenhof, the initiator of the language, is often used as a symbol. Sometimes he is even called "Uncle Zam", referring to the cartoon incarnation of American Uncle Sam. In addition to textbooks, including the Fundamento de Esperanto by Zamenhof, the Assimil method and video methods such as the BBC's Muzzy in Gondoland and Pasporto al la tuta mondo, there are many online courses. Moreover, some universities teach Esperanto, and the Higher Foreign Language training of Eötvös Loránd University delivers certificates in accordance with the Common European Framework of Reference for Languages (CEFR). More than 1600 people around the world hold such a certificate: in 2014 around 470 at level B1, 510 at level B2 and 700 at level C1. The International League of Esperanto Teachers (ILEI) is also working to publish learning materials for teachers.
The University of Esperanto offers video lectures in Esperanto on specialties like Confronting War, Informational Technologies and Astronomy. Courses are also held during the World Esperanto Congress in the framework of the Internacia Kongresa Universitato (IKU). After that, UEA uploads the related documents to its website. Science is also an active field for works in Esperanto. For example, the Conference on the Application of Esperanto in Science and Technology (KAEST) has taken place every November since 1998 in the Czech Republic and Slovakia. Personal initiatives are also common: the doctor of mathematics Ulrich Matthias created a document about the foundations of linear algebra, and an American group from Maine (USA) wrote a guidebook for learning the programming language Python. In general, Esperanto is used as a lingua franca on some websites aimed at teaching other languages, such as German, Slovak, Swahili, Wolof or Toki Pona. Esperanto periodicals have existed since 1889, when La Esperantisto appeared, soon followed by other magazines in Esperanto in many countries of the world. Some of them are information media of Esperanto associations (Esperanto, Sennaciulo and Kontakto). Online Esperanto magazines like Libera Folio, launched in 2003, offer an independent view of the Esperanto movement, aiming to soberly and critically shed light on current developments. Most of the magazines deal with current events; one such magazine is Monato, which is read in more than 60 countries. Its articles are written by correspondents from 40 countries, who know the local situation very well. Other popular Esperanto newspapers are La Ondo de Esperanto, Beletra Almanako, Literatura Foiro, and Heroldo de Esperanto. National associations also often publish magazines to inform about the movement in their country, such as Le Monde de l'espéranto of Espéranto-France. There are also scientific journals, such as Scienca Revuo of Internacia Scienca Asocio Esperantista (ISAE). Muzaiko is a radio station that has broadcast an all-day international program of songs, interviews and current events in Esperanto since 2011. The latest two can be downloaded as podcasts. Besides Muzaiko, these other stations offer an hour of Esperanto-language broadcasting on various topics: Radio Libertaire, Polskie Radio, Vatican Radio, Varsovia Vento, Radio Verda and Kern.punkto. The spread of the Internet has enabled more efficient communication among Esperanto speakers and has partly replaced slower media such as mail. Many widely used websites such as Facebook or Google offer an Esperanto interface. On 15 December 2009, on the occasion of the 150th birthday of L. L. Zamenhof, Google also displayed the Esperanto flag as part of its Google Doodles. Media such as Twitter, Telegram, Reddit or Ipernity also contain a significant number of people from this community. In addition, content providers such as WordPress and YouTube enable bloggers to write in Esperanto. Esperanto versions of programs such as the office suite LibreOffice, the Mozilla Firefox browser and the educational programming environment Scratch are also available. Additionally, online games like Minecraft offer a complete Esperanto interface. Monero, an anonymous cryptocurrency, was named after the Esperanto word for "coin" and its official wallet is available in Esperanto. The same applies to Monerujo ("Monero container"), the only open-source wallet for Android.
Although Esperantujo is not a country, there is an Esperanto football team, which has existed since 2014 and participates in matches during World Esperanto Congresses. The team is part of the N.F.-Board and not of FIFA, and has played against the team of the Armenian community of Argentina in 2014 and the team from Western Sahara in 2015. Initially, Esperanto speakers learned the language as it was described by L. L. Zamenhof. In 1905, the Fundamento de Esperanto brought together the first Esperanto textbook, an exercise book and a universal dictionary. The "Declaration about the essence of Esperantism" (1905) defines an "Esperantist" to be anyone who speaks and uses Esperanto. "Esperantism" was defined to be a movement to promote the widespread use of Esperanto as a supplement to mother tongues in international and inter-ethnic contexts. As the word "esperantist" is linked with this "esperantism" (the Esperanto movement) and as -ists and -isms are linked with ideologies, today many people who speak Esperanto prefer to be called "Esperanto speaker". Every year since 1998 the monthly magazine La Ondo de Esperanto has proclaimed an 'Esperantist of the year', who contributed notably to the spread of the language during that year. Publishing and selling books, through the so-called book services, is the main market and is often the first expenditure of many Esperanto associations. Some companies are already well known: for example Vinilkosmo, which has published and promoted Esperanto music since 1990. There are also initiatives such as the job-seeking website Eklaboru, created by Chuck Smith, for job offers and candidates within Esperanto associations or Esperanto meetings. In 1907, René de Saussure proposed the spesmilo ⟨₷⟩ as an international currency. It had some use before the First World War. In 1942 a currency called the stelo ("star"; plural, steloj) was created. It was used at meetings of the Universala Ligo and in Esperanto environments such as the annual Universal Congress. Over the years it slowly fell out of use, and at the official closing of the Universala Ligo in the 1990s the remaining steloj coins were handed over to the UEA. They can be bought at the UEA's book service as souvenirs. The current steloj are made of plastic; they are used at a number of meetings, especially among young people. The currency is maintained by Stelaro, which calculates the rates, keeps the stock, and has opened branches at various e-meetings. Currently, there are stelo-coins of 1 ★, 3 ★ and 10 ★. The exchange rate at 31 December 2014 was 1 EUR = 4.189 ★.[25] There exist Zamenhof-Esperanto objects (ZEOs), scattered across numerous countries around the world, which are things named in honor of L. L. Zamenhof or Esperanto: monuments, street names, places and so on. There also exists a UEA-committee for ZEOs. In addition, in several countries there are also sites dedicated to Esperanto: meetup places, workshops, seminars, festivals, Esperanto houses. These places provide attractions for Esperantists. Here are two: the Castle of Grésilion in France and the Department of Planned Languages and Esperanto Museum in Vienna (Austria). The Esperanto literary heritage is the richest and most diverse of any constructed language. There are over 25,000 Esperanto books (originals and translations) as well as over a hundred regularly distributed Esperanto magazines. There are also a number of movies which have been released in Esperanto. Moreover, Esperanto itself has been used in numerous movies.
Many public holidays recognized by Esperanto speakers are celebrated internationally, having gained full acceptance by organizations such as the UN and UNESCO, and are also publicly observed in select countries that are UN members. This is largely a byproduct of the influence the Esperanto community once had on organizations that worked in the field of international relations (including the United Nations) in the mid-20th century. Here are the celebrations proposed as international holidays by the UEA since 2010: Every year numerous meetings of Esperanto speakers on different topics take place around the world. They bring together Esperanto speakers who share an interest in a specific topic. The main example is the Universal Congress of Esperanto (UK), which the UEA organizes every summer for a week. Other events: Alongside these worldwide meetings there are also local events such as the New Year's Gathering (NR) or the Esperanto Youth Week (JES), which occur during the last days of December and the first days of January. These meetings seem to have been successful during the last 20 years. Because there are many Esperanto meetings around the globe, several websites aim to list and share them. Eventa Servo provides an up-to-date list of online meetings and in-person events happening each week. Eventoj.hu describes events with a list and dates, and contains an archive dating back to 1996.
[ { "paragraph_id": 0, "text": "Esperantujo (IPA: [esperanˈtujo]) or Esperantio (IPA: [esperanˈtio]) is the community of speakers of the Esperanto language and their culture, as well as the places and institutions where the language is used. The term is used \"as if it were a country.\"", "title": "" }, { "paragraph_id": 1, "text": "Although it does not occupy its own area of Earth's surface, it can be said to constitute the 120 countries which have their own national Esperanto association.", "title": "" }, { "paragraph_id": 2, "text": "The word is formed analogously to country names. In Esperanto, the names of countries were traditionally formed from the ethnic name of their inhabitants plus the suffix -ujo. For example, \"France\" was Francujo, from franco (a Frenchman).", "title": "Etymology and terminology" }, { "paragraph_id": 3, "text": "The term analogous to Francujo would be Esperantistujo (Esperantist-nation). However, that would convey the idea of the physical body of people, whereas using the name of the language as the basis of the word gives it the more abstract connotation of a cultural sphere.", "title": "Etymology and terminology" }, { "paragraph_id": 4, "text": "Currently, names of nation states are often formed with the suffix -io (traditionally reserved for deriving country names from geographic features — e.g. Francio instead of Francujo), and recently the form Esperantio has been used, among others, in the Pasporta Servo and the Esperanto Citizens' Community.", "title": "Etymology and terminology" }, { "paragraph_id": 5, "text": "In 1908, Dr. Wilhelm Molly attempted to create an Esperanto state in the Prussian-Belgian condominium of Neutral Moresnet, known as \"Amikejo\" (place of friendship). What became of it is unclear, and Neutral Moresnet was annexed to Belgium in the Treaty of Versailles, 1919.", "title": "History" }, { "paragraph_id": 6, "text": "During the 1960s came a new effort of creating an Esperanto state, which this time was called Republic of Rose Island. The state island stood in the Adriatic Sea near Italy.", "title": "History" }, { "paragraph_id": 7, "text": "In Europe on 2 June 2001 a number of organizations (they prefer to call themselves establishments) founded the Esperanta Civito, which \"aims to be a subject of international law\" and \"aims to consolidate the relations between the Esperantists who feel themselves belonging to the diaspora language group which does not belong to any country\". Esperanto Civito always uses the name Esperantujo (introduced by Hector Hodler in 1908), which itself is defined according to their interpretation of raumism, and the meaning, therefore, may differ from the traditional Esperanto understanding of the word Esperantujo.", "title": "History" }, { "paragraph_id": 8, "text": "A language learning partner application called Amikumu has been launched in 2017, allowing Esperanto speakers to find each other.", "title": "History" }, { "paragraph_id": 9, "text": "Esperantujo includes any physical place where Esperanto speakers meet, such as Esperanto gatherings or virtual networks. 
Sometimes it is said that it is everywhere where Esperanto speakers are connected.", "title": "Geography" }, { "paragraph_id": 10, "text": "Although Esperantujo does not have its own official territory, a number of places around the world are owned by Esperanto organizations or are otherwise permanently connected to the Esperanto language and its community:", "title": "Geography" }, { "paragraph_id": 11, "text": "Judging by the members of the World Esperanto Association, the countries with the most Esperanto speakers are (in descending order): Brazil, Germany, Japan, France, the United States, China, Italy.", "title": "Geography" }, { "paragraph_id": 12, "text": "There is no governmental system in Esperantujo because it is not a true state. However, there is a social hierarchy of associations:", "title": "Politics" }, { "paragraph_id": 13, "text": "Also there are thematic associations worldwide, which are concerned with spirituality, hobbies, science or bringing together Esperantists who share common interests.", "title": "Politics" }, { "paragraph_id": 14, "text": "There is also a number of global organizations, such as Sennacieca Asocio Tutmonda (SAT), or the World Esperanto Youth Organization (TEJO), which has 46 national sections.", "title": "Politics" }, { "paragraph_id": 15, "text": "Universal Esperanto Association is not a governmental system; however, the association represents Esperanto worldwide. In addition to the United Nations and UNESCO, the UEA has consultative relationships with UNICEF and the Council of Europe and general cooperative relations with the Organization of American States. UEA officially collaborates with the International Organization for Standardization (ISO) by means of an active connection to the ISO Committee on terminology (ISO/TC 37). The association is active for information on the European Union and other interstate and international organizations and conferences. UEA is a member of European Language Council, a joint forum of universities and linguistic associations to promote the knowledge of languages and cultures within and outside the European Union. Moreover, on 10 May 2011, the UEA and the International Information Center for Terminology (Infoterm) signed an Agreement on Cooperation, its objectives are inter exchange information, support each other and help out for projects, meetings, publications in the field of terminology and by which the UEA become Associate Member of Infoterm.", "title": "Politics" }, { "paragraph_id": 16, "text": "In 2003 there was a European political movement called Europe–Democracy–Esperanto created. Within it is found a European federation that brings together local associations whose statutes depends on the countries. The working language of the movement is Esperanto. The goal is \"to provide the European Union with the necessary tools to set up member rights democracy\". The international language is a tool to enable cross-border political and social dialogue and actively contribute to peace and understanding between peoples. The original idea in the first ballot was mainly to spread the existence and the use of Esperanto to the general public. However, in France voices have grown steadily: 25067 (2004) 28944 (2009) and 33115 (2014). In this country there are a number of movements which support the issue: France Équité, Europe-Liberté, and Politicat.", "title": "Politics" }, { "paragraph_id": 17, "text": "The flag of Esperanto is called Verda Flago (Green Flag). 
It consists of:", "title": "Politics" }, { "paragraph_id": 18, "text": "The anthem is called \"La Espero\" since 1891: it is a poem written by L. L. Zamenhof. The song is usually sung at the triumphal march composed by Félicien Menu de Ménil in 1909.", "title": "Politics" }, { "paragraph_id": 19, "text": "The Jubilee symbol represents the language internally, while the flag represents the Esperanto movement. It contains the Latin letter E (Esperanto) and the Cyrillic letter Э (Эсперанто) symbolizing the unification of West and East. The Jubilee symbol has been controversial, with some Esperantists derisively calling it \"the melon.\"", "title": "Politics" }, { "paragraph_id": 20, "text": "In addition, Ludwik Lejzer Zamenhof, the initiator of the language, is often used as a symbol. Sometimes he is even called \"Uncle Zam\", referring to the cartoon incarnation of American Uncle Sam.", "title": "Politics" }, { "paragraph_id": 21, "text": "In addition to textbooks, including the Fundamento de Esperanto by Zamenhof, the Assimil-methods and the video-methods such as Muzzy in Gondoland of the BBC and Pasporto al la tuta mondo, there are many courses for learning online. Moreover, some universities teach Esperanto, and the Higher Foreign Language training (University Eötvös Loránd) delivers certificates in accordance with the Common European Framework of Reference for Languages (CEFR). More than 1600 people have such a certificate around the world: in 2014 around 470 at the level of B1, 510 at the level of B2 and 700 for C1. The International League of Esperanto Teachers (ILEI) is also working to publish learning materials for teachers.", "title": "Population" }, { "paragraph_id": 22, "text": "The University of Esperanto offers video lectures in Esperanto, for specialties like Confronting War, Informational Technologies and Astronomy. Courses are also held during the World Esperanto Congress in the framework of the Internacia Kongresa Universitato (IKU). After that, UEA uploads the related documents on its website.", "title": "Population" }, { "paragraph_id": 23, "text": "Science is an appropriate department for works in Esperanto. For example, the Conference on the Application of Esperanto in Science and Technology (KAEST) occurs in November every year since 1998 in the Czech Republic and Slovakia. Personal initiatives are also common: Doctor of mathematics Ulrich Matthias created a document about the foundations of Linear Algebra and the American group of Maine (USA) wrote a guidebook to learn the programming language Python.", "title": "Population" }, { "paragraph_id": 24, "text": "In general, Esperanto is used as a lingua franca in some websites aiming teaching of other languages, such as German, Slovak, Swahili, Wolof or Toki Pona.", "title": "Population" }, { "paragraph_id": 25, "text": "Since 1889 when La Esperantisto appeared, and soon other magazines in Esperanto throughout many countries in the world. Some of them are information media of Esperanto associations (Esperanto, Sennaciulo and Kontakto). Online Esperanto magazines like Libera Folio, launched in 2003, offer independent view of the Esperanto movement, aiming to soberly and critically shed light on current development. Most of the magazines deal with current events; one of such magazines is Monato, which is read in more than 60 countries. Its articles are written by correspondents from 40 countries, which know the local situation very well. 
Other most popular Esperanto newspapers are La Ondo de Esperanto, Beletra Almanako, Literatura Foiro, and Heroldo de Esperanto. Often national associations magazines are also published in order to inform about the movement in the country, such as Le Monde de l'espéranto of Espéranto-France. There are also scientific journals, such as Scienca Revuo of Internacia Scienca Asocio Esperantista (ISAE).", "title": "Population" }, { "paragraph_id": 26, "text": "Muzaiko is a radio that has broadcast an all-day international program of songs, interviews and current events in Esperanto since 2011. The latest two can be downloaded as podcasts. Besides Muzaiko, these other stations offer an hour of Esperanto-language broadcasting of various topics: Radio Libertaire, Polskie Radio, Vatican Radio, Varsovia Vento, Radio Verda and Kern.punkto.", "title": "Population" }, { "paragraph_id": 27, "text": "Spread of the Internet has enabled more efficient communication among Esperanto speakers and slightly replaced slower media such as mail. Many massively used websites such as Facebook or Google offer Esperanto interface. On 15 December 2009, on the occasion of the jubilee of 150th birthday of L. L. Zamenhof, Google additionally made visible the Esperanto flag as a part of their Google Doodles. Media as Twitter, Telegram, Reddit or Ipernity also contain a significant number of people in this community. In addition, content-providers such as WordPress and YouTube also enable bloggers write in Esperanto. Esperanto versions of programs such as the office suite LibreOffice and Mozilla Firefox browser, or the educational program about programming Scratch are also available. Additionally, online games like Minecraft offer complete Esperanto interface.", "title": "Population" }, { "paragraph_id": 28, "text": "Monero, an anonymous cryptocurrency, was named after the Esperanto word for \"coin\" and its official wallet is available in Esperanto. The same applies to Monerujo (\"Monero container\"), the only open-source wallet for Android.", "title": "Population" }, { "paragraph_id": 29, "text": "Although Esperantujo is not a country, there is an Esperanto football team (eo, es), which has existed since 2014 and participates in matches during World Esperanto Congresses. The team is part of the N.F.-Board and not of FIFA, and have played against the teams of Armenian-originating Argentine Community in 2014 and the team from Western Sahara in 2015.", "title": "Population" }, { "paragraph_id": 30, "text": "Initially, Esperanto speakers learned the language as it was described by L. L. Zamenhof. In 1905, the Fundamento de Esperanto put together the first Esperanto textbook, an exercise book and a universal dictionary.", "title": "Population" }, { "paragraph_id": 31, "text": "The \"Declaration about the essence of Esperantism\" (1905) defines an \"Esperantist\" to be anyone who speaks and uses Esperanto. \"Esperantism\" was defined to be a movement to promote the widespread use of Esperanto as a supplement to mother tongues in international and inter-ethnic contexts. 
As the word \"esperantist\" is linked with this \"esperantism\" (the Esperanto movement) and as -ists and -isms are linked with ideologies, today many people who speak Esperanto prefer to be called \"Esperanto speaker\".", "title": "Population" }, { "paragraph_id": 32, "text": "The monthly magazine La Ondo de Esperanto every year since 1998 proclaims an 'Esperantist of the year', who remarkably contributed to the spreading of the language during the year.", "title": "Population" }, { "paragraph_id": 33, "text": "Publishing and selling books, the so-called book services, is the main market and is often the first expenditure of many Esperanto associations. Some companies are already well known: for example Vinilkosmo, which publishes and makes popular Esperanto music since 1990. Then there are initiatives such as the job-seeking website Eklaboru, created by Chuck Smith, for job offers and candidates within Esperanto associations or Esperanto meetings.", "title": "Economy" }, { "paragraph_id": 34, "text": "In 1907, René de Saussure proposed the spesmilo ⟨₷⟩ as an international currency. It had some use before the First World War.", "title": "Economy" }, { "paragraph_id": 35, "text": "In 1942 a currency called the stelo (\"star\"; plural, steloj) was created. It was used at meetings of the Universala Ligo and in Esperanto environments such as the annual Universal Congress. Over the years it slowly became unusable and at the official closing of the Universala Ligo in the 1990s, the remaining steloj coins were handed over to the UEA. You can buy them at the UEA's book service as souvenirs.", "title": "Economy" }, { "paragraph_id": 36, "text": "The current steloj are made of plastic, they are used in a number of meetings, especially among young people. The currency is maintained by Stelaro, which calculates the rates, keeps the stock, and opened branches in various e-meetings. Currently, there are stelo-coins of 1 ★, 3 ★ and 10 ★. Quotes of Stars at 31 December 2014 were [25] 1 EUR = 4.189 ★.", "title": "Economy" }, { "paragraph_id": 37, "text": "There exist Zamenhof-Esperanto objects (ZEOs), scattered in numerous countries around the world, which are the things named in honor of L. L. Zamenhof or Esperanto: monuments, street names, places and so on. There also exists a UEA-committee for ZEOs.", "title": "Culture" }, { "paragraph_id": 38, "text": "In addition, in several countries there are also sites dedicated to Esperanto: meetup places, workshops, seminars, festivals, Esperanto houses. These places provide attractions for Esperantists. Here are two: the Castle of Grésilion in France and the Department of Planned Languages and Esperanto Museum in Vienna (Austria).", "title": "Culture" }, { "paragraph_id": 39, "text": "Esperanto literary heritage is the richest and the most diverse of any constructed language. There are over 25,000 Esperanto books (originals and translations) as well as over a hundred regularly distributed Esperanto magazines.", "title": "Culture" }, { "paragraph_id": 40, "text": "There are also a number of movies which have been published in Esperanto. Moreover, Esperanto itself was used in numerous movies.", "title": "Culture" }, { "paragraph_id": 41, "text": "Many public holidays recognized by Esperanto speakers are celebrated internationally, having gained full acceptance by organizations such as UN and UNESCO, and are also publicly observed in select countries that are UN members. 
This is largely a byproduct of the influence the Esperanto community once had on organizations that worked in the field of international relations (including the United Nations) in the mid-20th century. Here are the celebrations proposed as international holidays by the UEA since 2010:", "title": "Culture" }, { "paragraph_id": 42, "text": "Every year numerous meetings of Esperanto speakers in different topics around the world take place. They mobilize Esperanto-speakers which share the same will about a specific topic. The main example is the Universal Congress of Esperanto (UK), which annually organizes the UEA every summer for a week. Other events:", "title": "Culture" }, { "paragraph_id": 43, "text": "Next to these globally comprising meetings there are also local events such as New Year's Gathering (NR) or Esperanto Youth Week (JES), which occur during the last days of December and first days of January. These meetings seem to have been successful during the last 20 years.", "title": "Culture" }, { "paragraph_id": 44, "text": "Due to the fact that there are a lot of Esperanto meetings around the globe, there are websites which aim to list and share them. Eventa Servo provides an up-to-date list of online meetings and in-person events happening each week. Eventoj.hu describes events with a list and dates, and contains an archive until 1996.", "title": "Culture" } ]
Esperantujo or Esperantio is the community of speakers of the Esperanto language and their culture, as well as the places and institutions where the language is used. The term is used "as if it were a country." Although it does not occupy its own area of Earth's surface, it can be said to constitute the 120 countries which have their own national Esperanto association.
2001-11-03T00:16:41Z
2023-11-26T01:37:10Z
[ "Template:Main article", "Template:More citations needed", "Template:Further", "Template:Cite magazine", "Template:Cite book", "Template:Lang-de", "Template:Color", "Template:Cite web", "Template:Lang", "Template:See also", "Template:Dubious", "Template:Reflist", "Template:Infobox settlement", "Template:Esperanto sidebar", "Template:IPA", "Template:Citation needed", "Template:Lang-eo", "Template:ISSN", "Template:In lang" ]
https://en.wikipedia.org/wiki/Esperantujo
9,499
Ethernet
Ethernet (/ˈiːθərnɛt/ EE-thər-net) is a family of wired computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses a thick coaxial cable as a shared medium. This was largely superseded by 10BASE2, which used a thinner and more flexible cable that was both cheaper and easier to use. More modern Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 Mbit/s to the latest 400 Gbit/s, with rates up to 1.6 Tbit/s under development. The Ethernet standards include several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers. Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974 as a means to allow Alto computers to communicate with each other. It was inspired by ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation and was originally called the Alto Aloha Network. The idea was first documented in a memo that Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. Yogen Dalal, Ron Crane, Bob Garner, and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980. Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a standard. As part of that process Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published on September 30, 1980, as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (Digital Intel Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype-type field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. 
Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols. Ethernet was able to adapt to market needs and with 10BASE2, shift to inexpensive thin coaxial cable and from 1990, to the now-ubiquitous twisted pair with 10BASE-T. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. In the 1980s, IBM's own PC Network product competed with Ethernet for the PC, and through the 1980s, LAN hardware, in general, was not common on PCs. However, in the mid to late 1980s, PC networking did become popular in offices and schools for printer and fileserver sharing, and among the many diverse competing LAN technologies of that decade, Ethernet was one of the most popular. Parallel port based Ethernet adapters were produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that Ethernet ports began to appear on some PCs and most workstations. This process was greatly sped up with the introduction of 10BASE-T and its relatively small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements. In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications and is quickly replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The "DIX-group" with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox) submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). 
Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE by the establishment of a competing Task Group "Local Networks" within the European standards body ECMA TC24. In March 1982, ECMA TC24 with its corporate members reached an agreement on a standard for CSMA/CD based on the IEEE 802 draft. Because the DIX proposal was most technically complete and because of the speedy action taken by ECMA which decisively contributed to the conciliation of opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December 1982. IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985. Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Fromm as the liaison officer working to integrate with International Electrotechnical Commission (IEC) Technical Committee 83 and International Organization for Standardization (ISO) Technical Committee 97 Sub Committee 6. The ISO 8802-3 standard was published in 1989. Ethernet has evolved to include higher bandwidth, improved medium access control methods, and different physical media. The multidrop coaxial cable was replaced with physical point-to-point links connected by Ethernet repeaters or switches. Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with globally unique 48-bit MAC address so that each Ethernet station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations. An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying, because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together. Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats. Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants. Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, by 2004 most manufacturers built Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card. Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The method used was similar to those used in radio systems, with the common cable providing the communication channel likened to the Luminiferous aether in 19th-century physics, and it was from this reference that the name "Ethernet" was derived. Original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to every attached machine. 
A scheme known as carrier-sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies. Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable. Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called thick Ethernet or thicknet. Its successor, 10BASE2, called thin Ethernet or thinnet, used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly. Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active. A collision happens when two stations attempt to transmit at the same time. They corrupt transmitted data and require stations to re-transmit. The lost data and re-transmission reduces throughput. In the worst case, where multiple active hosts connected with maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 studied performance of an existing Ethernet installation under both normal and artificially generated heavy load. The report claimed that 98% throughput on the LAN was observed. This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. This report was controversial, as modeling showed that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results. Performance on real networks is significantly better. In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full duplex mode of operation which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, switch and station can send and receive simultaneously, and therefore modern Ethernets are completely collision-free. For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size. Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. 
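Before continuing with repeaters and star wiring, the CSMA/CD arbitration described above can be made concrete with a short sketch. The following Python fragment is a toy model rather than a simulation of real hardware: the channel object and its methods are invented stand-ins, while the 512-bit slot time, the cap of 10 doublings and the limit of 16 attempts follow the commonly cited parameters of classic 10 Mbit/s CSMA/CD (truncated binary exponential backoff).

```python
import random

SLOT_TIME_BITS = 512   # one contention slot = 512 bit times in classic 10 Mbit/s Ethernet
MAX_ATTEMPTS   = 16    # give up and report excessive collisions after 16 tries
BACKOFF_LIMIT  = 10    # the random backoff range stops doubling after 10 collisions

def csma_cd_send(channel, frame) -> bool:
    """Toy model of one station sending a frame over a shared medium with CSMA/CD.

    `channel` is an assumed stand-in with wait_until_idle(), transmit() and wait().
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        channel.wait_until_idle()           # carrier sense: defer while the cable is busy
        collided = channel.transmit(frame)  # collisions are detected during transmission
        if not collided:
            return True                     # the frame made it onto the wire

        # Truncated binary exponential backoff: wait a random number of slot
        # times drawn from 0 .. 2**k - 1, where k is capped at BACKOFF_LIMIT.
        k = min(attempt, BACKOFF_LIMIT)
        channel.wait(random.randrange(2 ** k) * SLOT_TIME_BITS)

    return False                            # excessive collisions: give up
```

The doubling of the random backoff range after each collision is what lets many stations share one cable without central coordination, at the cost of an unbounded worst-case delay under heavy load.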
Once repeaters with more than two ports became available, it was possible to wire the network in a star topology. Early experiments with star topologies (called Fibernet) using optical fiber were published by 1978. Shared cable Ethernet is always hard to install in offices because its bus topology is in conflict with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s. Ethernet on unshielded twisted-pair cables (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s. In 1987 SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet. These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network. Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed. While repeaters can isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. The entire network is one collision domain, and all hosts have to be able to detect collisions anywhere on the network. This limits the number of repeaters between the farthest nodes and creates practical limits on how many machines can communicate on an Ethernet network. Segments joined by repeaters have to all operate at the same speed, making phased-in upgrades impossible. To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. At initial startup, Ethernet bridges work somewhat like Ethernet repeaters, passing all traffic between segments. By observing the source addresses of incoming frames, the bridge then builds an address table associating addresses to segments. Once an address is learned, the bridge forwards network traffic destined for that address only to the associated segment, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcome the limits on total segments between two hosts and allow the mixing of speeds, both of which are critical to the incremental deployment of faster Ethernet variants. 
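The learning and filtering behaviour just described can be sketched in a few lines. The class below is a minimal illustration with invented names and no aging or spanning-tree logic, assuming frames arrive already parsed into destination and source addresses; it only shows how observing source addresses builds the forwarding table and how that table restricts where traffic is sent.

```python
class LearningBridge:
    """Minimal sketch of a transparent bridge's forwarding logic (no aging, no STP)."""

    def __init__(self, segments):
        self.segments = list(segments)  # identifiers of the attached Ethernet segments
        self.table = {}                 # source MAC address -> segment it was last seen on

    def handle_frame(self, in_segment, dst: bytes, src: bytes):
        """Return the list of segments the frame should be forwarded to."""
        # Learn: the source address is evidently reachable via the arrival segment.
        self.table[src] = in_segment

        # Group (broadcast/multicast) addresses are flooded to every other segment.
        if dst[0] & 0x01:
            return [s for s in self.segments if s != in_segment]

        out = self.table.get(dst)
        if out is None:
            # Unknown unicast destination: flood, much as a repeater would.
            return [s for s in self.segments if s != in_segment]
        if out == in_segment:
            return []        # destination is on the arrival segment: filter the frame
        return [out]         # forward only toward the learned segment
```

A real bridge also ages entries out of its table and, as the following passages note, needs a loop-prevention protocol such as STP before redundant links can be added safely.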
In 1989, Motorola Codex introduced their 6310 EtherSpan, and Kalpana introduced their EtherSwitch; these were examples of the first commercial Ethernet switches. Early switches such as this used cut-through switching where only the header of the incoming packet is examined before it is either dropped or forwarded to another segment. This reduces the forwarding latency. One drawback of this method is that it does not readily allow a mixture of different link speeds. Another is that packets that have been corrupted are still propagated through the network. The eventual remedy for this was a return to the original store and forward approach of bridging, where the packet is read into a buffer on the switch in its entirety, its frame check sequence verified and only then the packet is forwarded. In modern network equipment, this process is typically done using application-specific integrated circuits allowing packets to be forwarded at wire speed. When a twisted pair or fiber link segment is used and neither end is connected to a repeater, full-duplex Ethernet becomes possible over that segment. In full-duplex mode, both devices can transmit and receive to and from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (for example, 200 Mbit/s for Fast Ethernet). The elimination of the collision domain for these connections also means that all the link's bandwidth can be used by the two devices on that segment and that segment length is not limited by the constraints of collision detection. Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the improved isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology. Simple switched Ethernet networks, while a great improvement over repeater-based Ethernet, suffer from single points of failure, attacks that trick switches or hosts into sending data to a machine even if it is not intended for it, scalability and security issues with regard to switching loops, broadcast radiation, and multicast traffic. Advanced networking features in switches use Shortest Path Bridging (SPB) or the Spanning Tree Protocol (STP) to maintain a loop-free, meshed network, allowing physical loops for redundancy (STP) or load-balancing (SPB). Shortest Path Bridging includes the use of the link-state routing protocol IS-IS to allow larger networks with shortest path routes between devices. Advanced networking features also ensure port security, provide protection features such as MAC lockdown and broadcast radiation filtering, use VLANs to keep different classes of users separate while using the same physical infrastructure, employ multilayer switching to route between different classes, and use link aggregation to add bandwidth to overloaded links and to provide some redundancy. In 2016, Ethernet replaced InfiniBand as the most popular system interconnect of TOP500 supercomputers. 
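The cut-through versus store-and-forward distinction mentioned above comes down to whether the frame check sequence is verified before forwarding begins. The snippet below is a schematic comparison rather than switch firmware: the port and stream objects are assumed placeholders, the trailing four bytes of the buffered frame are taken to be the FCS, and the check uses the standard CRC-32 polynomial that Ethernet shares with zlib.

```python
import zlib

def fcs_ok(frame: bytes) -> bool:
    """Check the trailing 32-bit frame check sequence of a fully buffered frame."""
    body, fcs = frame[:-4], frame[-4:]
    # Ethernet's FCS uses the same CRC-32 polynomial as zlib; in frame captures the
    # four FCS bytes correspond to that value in little-endian byte order.
    return zlib.crc32(body).to_bytes(4, "little") == fcs

def store_and_forward(frame: bytes, out_port) -> None:
    # The whole frame is buffered and verified first, so damaged frames are dropped
    # here instead of being propagated to the next segment.
    if fcs_ok(frame):
        out_port.send(frame)

def cut_through(header: bytes, rest, out_port) -> None:
    # Forwarding starts as soon as the destination address has been read, which
    # lowers latency but passes corrupted frames along before the FCS is ever seen.
    out_port.send(header)
    for chunk in rest:
        out_port.send(chunk)
```

Verifying the FCS first trades a full frame's worth of buffering latency for the guarantee that corrupted frames are not propagated, which is why the store-and-forward approach ultimately prevailed.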
The Ethernet physical layer evolved over a considerable time span and encompasses coaxial, twisted pair and fiber-optic physical media interfaces, with speeds from 1 Mbit/s to 400 Gbit/s. The first introduction of twisted-pair CSMA/CD was StarLAN, standardized as 802.3 1BASE5. While 1BASE5 had little market penetration, it defined the physical apparatus (wire, plug/jack, pin-out, and wiring plan) that would be carried over to 10BASE-T through 10GBASE-T. The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. Fiber optic variants of Ethernet (that commonly use SFP modules) are also very popular in larger networks, offering high performance, better electrical isolation and longer distance (tens of kilometers with some versions). In general, network protocol stack software will work similarly on all varieties. In IEEE 802.3, a datagram is called a packet or frame. Packet is used to describe the overall transmission unit and includes the preamble, start frame delimiter (SFD) and carrier extension (if present). The frame begins after the start frame delimiter with a frame header featuring source and destination MAC addresses and the EtherType field giving either the protocol type for the payload protocol or the length of the payload. The middle section of the frame consists of payload data including any headers for other protocols (for example, Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of data in transit. Notably, Ethernet packets have no time-to-live field, leading to possible problems in the presence of a switching loop. Autonegotiation is the procedure by which two connected devices choose common transmission parameters, e.g. speed and duplex mode. Autonegotiation was initially an optional feature, first introduced with 100BASE-TX, while it is also backward compatible with 10BASE-T. Autonegotiation is mandatory for 1000BASE-T and faster. A switching loop or bridge loop occurs in computer networks when there is more than one Layer 2 (OSI model) path between two endpoints (e.g. multiple connections between two network switches or two ports on the same switch connected to each other). The loop creates broadcast storms as broadcasts and multicasts are forwarded by switches out every port, the switch or switches will repeatedly rebroadcast the broadcast messages flooding the network. Since the Layer 2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever. A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops. The solution is to allow physical loops, but create a loop-free logical topology using the SPB protocol or the older STP on the network switches. A node that is sending longer than the maximum transmission window for an Ethernet packet is considered to be jabbering. Depending on the physical topology, jabber detection and remedy differ somewhat.
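As a concrete illustration of the packet layout described above, the sketch below assembles and parses a simplified Ethernet II frame. It is an illustrative model, not a usable network stack: the helper names are invented, real interfaces add the preamble, minimum-size padding and FCS in hardware, and the FCS byte order shown is the one commonly seen in frame captures.

```python
import struct
import zlib

PREAMBLE  = b"\x55" * 7   # 7 octets of alternating bits; prepended by the hardware
SFD       = b"\xd5"       # start frame delimiter; also added by the hardware
MIN_FRAME = 64            # minimum frame size from the addresses through the FCS

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a simplified Ethernet II frame: header + payload (+ padding) + FCS."""
    body = dst + src + struct.pack("!H", ethertype) + payload
    if len(body) < MIN_FRAME - 4:                    # pad up to 60 bytes before the FCS
        body += b"\x00" * (MIN_FRAME - 4 - len(body))
    fcs = struct.pack("<I", zlib.crc32(body))        # CRC-32 over addresses through padding
    return body + fcs

def parse_frame(frame: bytes):
    """Split a frame back into its fields, distinguishing an EtherType from a length."""
    dst, src = frame[0:6], frame[6:12]
    type_or_len = struct.unpack("!H", frame[12:14])[0]
    # Values of 0x0600 (1536) and above are EtherTypes; 1500 and below are lengths.
    kind = "ethertype" if type_or_len >= 0x0600 else "length"
    return dst, src, kind, type_or_len, frame[14:-4]

if __name__ == "__main__":
    frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0806, b"example payload")
    print(len(frame), parse_frame(frame))
```

Because every generation of Ethernet keeps this frame format, the same parsing logic applies whether the bits arrived over coaxial cable, twisted pair or fiber.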
[ { "paragraph_id": 0, "text": "Ethernet (/ˈiːθərnɛt/ EE-thər-net) is a family of wired computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET.", "title": "" }, { "paragraph_id": 1, "text": "The original 10BASE5 Ethernet uses a thick coaxial cable as a shared medium. This was largely superseded by 10BASE2, which used a thinner and more flexible cable that was both cheaper and easier to use. More modern Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 Mbit/s to the latest 400 Gbit/s, with rates up to 1.6 Tbit/s under development. The Ethernet standards include several wiring and signaling variants of the OSI physical layer.", "title": "" }, { "paragraph_id": 2, "text": "Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers.", "title": "" }, { "paragraph_id": 3, "text": "Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.", "title": "" }, { "paragraph_id": 4, "text": "Ethernet was developed at Xerox PARC between 1973 and 1974 as a means to allow Alto computers to communicate with each other. It was inspired by ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation and was originally called the Alto Aloha Network. The idea was first documented in a memo that Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an \"omnipresent, completely-passive medium for the propagation of electromagnetic waves.\" In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. Yogen Dalal, Ron Crane, Bob Garner, and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.", "title": "History" }, { "paragraph_id": 5, "text": "Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a standard. As part of that process Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published on September 30, 1980, as \"The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications\". 
This so-called DIX standard (Digital Intel Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype-type field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983.", "title": "History" }, { "paragraph_id": 6, "text": "Ethernet initially competed with Token Ring and other proprietary protocols. Ethernet was able to adapt to market needs and with 10BASE2, shift to inexpensive thin coaxial cable and from 1990, to the now-ubiquitous twisted pair with 10BASE-T. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. In the 1980s, IBM's own PC Network product competed with Ethernet for the PC, and through the 1980s, LAN hardware, in general, was not common on PCs. However, in the mid to late 1980s, PC networking did become popular in offices and schools for printer and fileserver sharing, and among the many diverse competing LAN technologies of that decade, Ethernet was one of the most popular. Parallel port based Ethernet adapters were produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that Ethernet ports began to appear on some PCs and most workstations. This process was greatly sped up with the introduction of 10BASE-T and its relatively small modular connector, at which point Ethernet ports appeared even on low-end motherboards.", "title": "History" }, { "paragraph_id": 7, "text": "Since then, Ethernet technology has evolved to meet new bandwidth and market requirements. In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications and is quickly replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year.", "title": "History" }, { "paragraph_id": 8, "text": "In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The \"DIX-group\" with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox) submitted the so-called \"Blue Book\" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. 
In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal.", "title": "Standardization" }, { "paragraph_id": 9, "text": "Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE by the establishment of a competing Task Group \"Local Networks\" within the European standards body ECMA TC24. In March 1982, ECMA TC24 with its corporate members reached an agreement on a standard for CSMA/CD based on the IEEE 802 draft. Because the DIX proposal was most technically complete and because of the speedy action taken by ECMA which decisively contributed to the conciliation of opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December 1982. IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985.", "title": "Standardization" }, { "paragraph_id": 10, "text": "Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Fromm as the liaison officer working to integrate with International Electrotechnical Commission (IEC) Technical Committee 83 and International Organization for Standardization (ISO) Technical Committee 97 Sub Committee 6. The ISO 8802-3 standard was published in 1989.", "title": "Standardization" }, { "paragraph_id": 11, "text": "Ethernet has evolved to include higher bandwidth, improved medium access control methods, and different physical media. The multidrop coaxial cable was replaced with physical point-to-point links connected by Ethernet repeaters or switches.", "title": "Evolution" }, { "paragraph_id": 12, "text": "Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with globally unique 48-bit MAC address so that each Ethernet station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations.", "title": "Evolution" }, { "paragraph_id": 13, "text": "An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying, because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together. Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats. 
Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants.", "title": "Evolution" }, { "paragraph_id": 14, "text": "Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, by 2004 most manufacturers built Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card.", "title": "Evolution" }, { "paragraph_id": 15, "text": "Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The method used was similar to those used in radio systems, with the common cable providing the communication channel likened to the Luminiferous aether in 19th-century physics, and it was from this reference that the name \"Ethernet\" was derived.", "title": "Evolution" }, { "paragraph_id": 16, "text": "Original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to every attached machine. A scheme known as carrier-sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies. Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable.", "title": "Evolution" }, { "paragraph_id": 17, "text": "Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called thick Ethernet or thicknet. Its successor, 10BASE2, called thin Ethernet or thinnet, used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly.", "title": "Evolution" }, { "paragraph_id": 18, "text": "Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active.", "title": "Evolution" }, { "paragraph_id": 19, "text": "A collision happens when two stations attempt to transmit at the same time. They corrupt transmitted data and require stations to re-transmit. The lost data and re-transmission reduces throughput. In the worst case, where multiple active hosts connected with maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 studied performance of an existing Ethernet installation under both normal and artificially generated heavy load. The report claimed that 98% throughput on the LAN was observed. This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. 
This report was controversial, as modeling showed that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results. Performance on real networks is significantly better.", "title": "Evolution" }, { "paragraph_id": 20, "text": "In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full duplex mode of operation which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, switch and station can send and receive simultaneously, and therefore modern Ethernets are completely collision-free.", "title": "Evolution" }, { "paragraph_id": 21, "text": "For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size. Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. Once repeaters with more than two ports became available, it was possible to wire the network in a star topology. Early experiments with star topologies (called Fibernet) using optical fiber were published by 1978.", "title": "Evolution" }, { "paragraph_id": 22, "text": "Shared cable Ethernet is always hard to install in offices because its bus topology is in conflict with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s.", "title": "Evolution" }, { "paragraph_id": 23, "text": "Ethernet on unshielded twisted-pair cables (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s. In 1987 SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet. These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network.", "title": "Evolution" }, { "paragraph_id": 24, "text": "Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. 
The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed.", "title": "Evolution" }, { "paragraph_id": 25, "text": "While repeaters can isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. The entire network is one collision domain, and all hosts have to be able to detect collisions anywhere on the network. This limits the number of repeaters between the farthest nodes and creates practical limits on how many machines can communicate on an Ethernet network. Segments joined by repeaters have to all operate at the same speed, making phased-in upgrades impossible.", "title": "Evolution" }, { "paragraph_id": 26, "text": "To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. At initial startup, Ethernet bridges work somewhat like Ethernet repeaters, passing all traffic between segments. By observing the source addresses of incoming frames, the bridge then builds an address table associating addresses to segments. Once an address is learned, the bridge forwards network traffic destined for that address only to the associated segment, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcome the limits on total segments between two hosts and allow the mixing of speeds, both of which are critical to the incremental deployment of faster Ethernet variants.", "title": "Evolution" }, { "paragraph_id": 27, "text": "In 1989, Motorola Codex introduced their 6310 EtherSpan, and Kalpana introduced their EtherSwitch; these were examples of the first commercial Ethernet switches. Early switches such as this used cut-through switching where only the header of the incoming packet is examined before it is either dropped or forwarded to another segment. This reduces the forwarding latency. One drawback of this method is that it does not readily allow a mixture of different link speeds. Another is that packets that have been corrupted are still propagated through the network. The eventual remedy for this was a return to the original store and forward approach of bridging, where the packet is read into a buffer on the switch in its entirety, its frame check sequence verified and only then the packet is forwarded. In modern network equipment, this process is typically done using application-specific integrated circuits allowing packets to be forwarded at wire speed.", "title": "Evolution" }, { "paragraph_id": 28, "text": "When a twisted pair or fiber link segment is used and neither end is connected to a repeater, full-duplex Ethernet becomes possible over that segment. In full-duplex mode, both devices can transmit and receive to and from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (for example, 200 Mbit/s for Fast Ethernet). 
The elimination of the collision domain for these connections also means that all the link's bandwidth can be used by the two devices on that segment and that segment length is not limited by the constraints of collision detection.", "title": "Evolution" }, { "paragraph_id": 29, "text": "Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding.", "title": "Evolution" }, { "paragraph_id": 30, "text": "The bandwidth advantages, the improved isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.", "title": "Evolution" }, { "paragraph_id": 31, "text": "Simple switched Ethernet networks, while a great improvement over repeater-based Ethernet, suffer from single points of failure, attacks that trick switches or hosts into sending data to a machine even if it is not intended for it, scalability and security issues with regard to switching loops, broadcast radiation, and multicast traffic.", "title": "Evolution" }, { "paragraph_id": 32, "text": "Advanced networking features in switches use Shortest Path Bridging (SPB) or the Spanning Tree Protocol (STP) to maintain a loop-free, meshed network, allowing physical loops for redundancy (STP) or load-balancing (SPB). Shortest Path Bridging includes the use of the link-state routing protocol IS-IS to allow larger networks with shortest path routes between devices.", "title": "Evolution" }, { "paragraph_id": 33, "text": "Advanced networking features also ensure port security, provide protection features such as MAC lockdown and broadcast radiation filtering, use VLANs to keep different classes of users separate while using the same physical infrastructure, employ multilayer switching to route between different classes, and use link aggregation to add bandwidth to overloaded links and to provide some redundancy.", "title": "Evolution" }, { "paragraph_id": 34, "text": "In 2016, Ethernet replaced InfiniBand as the most popular system interconnect of TOP500 supercomputers.", "title": "Evolution" }, { "paragraph_id": 35, "text": "The Ethernet physical layer evolved over a considerable time span and encompasses coaxial, twisted pair and fiber-optic physical media interfaces, with speeds from 1 Mbit/s to 400 Gbit/s. The first introduction of twisted-pair CSMA/CD was StarLAN, standardized as 802.3 1BASE5. While 1BASE5 had little market penetration, it defined the physical apparatus (wire, plug/jack, pin-out, and wiring plan) that would be carried over to 10BASE-T through 10GBASE-T.", "title": "Varieties" }, { "paragraph_id": 36, "text": "The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively.", "title": "Varieties" }, { "paragraph_id": 37, "text": "Fiber optic variants of Ethernet (that commonly use SFP modules) are also very popular in larger networks, offering high performance, better electrical isolation and longer distance (tens of kilometers with some versions). 
In general, network protocol stack software will work similarly on all varieties.", "title": "Varieties" }, { "paragraph_id": 38, "text": "In IEEE 802.3, a datagram is called a packet or frame. Packet is used to describe the overall transmission unit and includes the preamble, start frame delimiter (SFD) and carrier extension (if present). The frame begins after the start frame delimiter with a frame header featuring source and destination MAC addresses and the EtherType field giving either the protocol type for the payload protocol or the length of the payload. The middle section of the frame consists of payload data including any headers for other protocols (for example, Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of data in transit. Notably, Ethernet packets have no time-to-live field, leading to possible problems in the presence of a switching loop.", "title": "Frame structure" }, { "paragraph_id": 39, "text": "Autonegotiation is the procedure by which two connected devices choose common transmission parameters, e.g. speed and duplex mode. Autonegotiation was initially an optional feature, first introduced with 100BASE-TX, while it is also backward compatible with 10BASE-T. Autonegotiation is mandatory for 1000BASE-T and faster.", "title": "Autonegotiation" }, { "paragraph_id": 40, "text": "A switching loop or bridge loop occurs in computer networks when there is more than one Layer 2 (OSI model) path between two endpoints (e.g. multiple connections between two network switches or two ports on the same switch connected to each other). The loop creates broadcast storms as broadcasts and multicasts are forwarded by switches out every port, the switch or switches will repeatedly rebroadcast the broadcast messages flooding the network. Since the Layer 2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever.", "title": "Error conditions" }, { "paragraph_id": 41, "text": "A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops. The solution is to allow physical loops, but create a loop-free logical topology using the SPB protocol or the older STP on the network switches.", "title": "Error conditions" }, { "paragraph_id": 42, "text": "A node that is sending longer than the maximum transmission window for an Ethernet packet is considered to be jabbering. Depending on the physical topology, jabber detection and remedy differ somewhat.", "title": "Error conditions" } ]
Ethernet is a family of wired computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses a thick coaxial cable as a shared medium. This was largely superseded by 10BASE2, which used a thinner and more flexible cable that was both cheaper and easier to use. More modern Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 Mbit/s to the latest 400 Gbit/s, with rates up to 1.6 Tbit/s under development. The Ethernet standards include several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer. The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers. Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.
2001-11-04T20:22:41Z
2023-12-24T04:17:52Z
[ "Template:Cite web", "Template:Basic computer components", "Template:Short description", "Template:IPAc-en", "Template:Convert", "Template:Cite press release", "Template:Authority control", "Template:Use mdy dates", "Template:Respell", "Template:Main", "Template:Div col", "Template:Cite AV media", "Template:Internet access", "Template:Circa", "Template:Cite news", "Template:Use American English", "Template:Val", "Template:Rp", "Template:US patent", "Template:Commons category", "Template:Ethernet", "Template:Efn", "Template:IPstack", "Template:Div col end", "Template:Cite book", "Template:Cite journal", "Template:Citation", "Template:Notelist", "Template:Cbignore", "Template:Cite magazine", "Template:Dead link", "Template:Citation needed", "Template:Nowrap", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Ethernet
9,502
List of explorations
Some of the most important explorations of state societies, in chronological order:
[ { "paragraph_id": 0, "text": "Some of the most important explorations of state societies, in chronological order:", "title": "" } ]
Some of the most important explorations of state societies, in chronological order:
2001-06-11T17:14:15Z
2023-10-22T20:20:30Z
[ "Template:Short description" ]
https://en.wikipedia.org/wiki/List_of_explorations
9,505
Elias Canetti
Elias Canetti (Bulgarian: Елиас Канети; 25 July 1905 – 14 August 1994; /kəˈnɛti, kɑː-/; German pronunciation: [eˈliːas kaˈnɛti]) was a German-language writer, born in Ruse, Bulgaria to a Sephardic family. They moved to Manchester, England, but his father died in 1912, and his mother took her three sons back to continental Europe. They settled in Vienna. Canetti moved to England in 1938 after the Anschluss to escape Nazi persecution. He became a British citizen in 1952. He is known as a modernist novelist, playwright, memoirist, and nonfiction writer. He won the Nobel Prize in Literature in 1981, "for writings marked by a broad outlook, a wealth of ideas and artistic power". He is noted for his nonfiction book Crowds and Power, among other works. Born in 1905 to businessman Jacques Canetti and Mathilde née Arditti in Ruse, a city on the Danube in Bulgaria, Canetti was the eldest of three sons. His ancestors were Sephardic Jews. His paternal ancestors settled in Ruse from Ottoman Adrianople. The original family name was Cañete, named after Cañete, Cuenca, a village in Spain. In Ruse, Canetti's father and grandfather were successful merchants who operated out of a commercial building, which they had built in 1898. Canetti's mother descended from the Arditti family, one of the oldest Sephardic families in Bulgaria, who were among the founders of the Ruse Jewish colony in the late 18th century. The Ardittis can be traced to the 14th century, when they were court physicians and astronomers to the Aragonese royal court of Alfonso IV and Pedro IV. Before settling in Ruse, they had migrated into Italy and lived in Livorno in the 17th century. Canetti spent his childhood years, from 1905 to 1911, in Ruse until the family moved to Manchester, England, where Canetti's father joined a business established by his wife's brothers. In 1912, his father died suddenly, and his mother moved with their children first to Lausanne, then Vienna in the same year. They lived in Vienna from the time Canetti was aged seven onwards. His mother insisted that he speak German and taught it to him. By this time, Canetti already spoke Ladino (his native language), Bulgarian, English, and some French; the last two he studied in the one year that they were in Britain. Subsequently, the family moved first (from 1916 to 1921) to Zürich and then (until 1924) to Frankfurt, where Canetti graduated from high school. Canetti went back to Vienna in 1924 in order to study chemistry. However, his primary interests during his years in Vienna became philosophy and literature. Introduced into the literary circles of First Republic Vienna, he started writing. Politically leaning towards the left, he was present at the July Revolt of 1927, came near to the action accidentally, was most impressed by the burning of books (recalled frequently in his writings) and left the place quickly with his bicycle. He received a doctorate in chemistry from the University of Vienna in 1929, but never worked as a chemist. He published two works in Vienna, Komödie der Eitelkeit 1934 (The Comedy of Vanity) and Die Blendung 1935 (Auto-da-Fé, 1935), before escaping to Great Britain. He reflected the experiences of Nazi Germany and political chaos in his works, especially exploring mob action and group thinking in the novel Die Blendung and in the non-fiction Crowds and Power (1960). He wrote several volumes of memoirs, contemplating the influence of his multi-lingual background and childhood. 
In 1934 in Vienna he married Veza (Venetiana) Taubner-Calderon (1897–1963), who acted as his muse and devoted literary assistant. Canetti remained open to relationships with other women. He had a short affair with the sculptor Anna Mahler, the daughter of the composer Gustav Mahler. In 1938, after the Anschluss with Germany, the Canettis moved to London. He became closely involved with the painter Marie-Louise von Motesiczky, who was to remain a close companion for many years. His name has also been linked with the author Iris Murdoch (see John Bayley's Iris, A Memoir of Iris Murdoch, which has several references to an author, referred to as "the Dichter", who was a Nobel Laureate and whose works included Die Blendung [English title Auto-da-Fé]). After Veza died in 1963, Canetti married Hera Buschor (1933–1988), with whom he had a daughter, Johanna, in 1972. Canetti's brother Jacques Canetti settled in Paris, where he championed a revival of French chanson. Despite being a German-language writer, Canetti settled in Britain until the 1970s, receiving British citizenship in 1952. For his last 20 years, Canetti lived mostly in Zürich. A writer in German, Canetti won the Nobel Prize in Literature in 1981, "for writings marked by a broad outlook, a wealth of ideas and artistic power". He is known chiefly for his celebrated trilogy of autobiographical memoirs of his childhood and of pre-Anschluss Vienna: Die Gerettete Zunge (The Tongue Set Free); Die Fackel im Ohr (The Torch in My Ear), and Das Augenspiel (The Play of the Eyes); for his modernist novel Auto-da-Fé (Die Blendung); and for Crowds and Power, a psychological study of crowd behaviour as it manifests itself in human activities ranging from mob violence to religious congregations. In the 1970s, Canetti began to travel more frequently to Zurich, where he settled and lived for his last 20 years. He died in Zürich in 1994.
[ { "paragraph_id": 0, "text": "Elias Canetti (Bulgarian: Елиас Канети; 25 July 1905 – 14 August 1994; /kəˈnɛti, kɑː-/; German pronunciation: [eˈliːas kaˈnɛti]) was a German-language writer, born in Ruse, Bulgaria to a Sephardic family. They moved to Manchester, England, but his father died in 1912, and his mother took her three sons back to continental Europe. They settled in Vienna.", "title": "" }, { "paragraph_id": 1, "text": "Canetti moved to England in 1938 after the Anschluss to escape Nazi persecution. He became a British citizen in 1952. He is known as a modernist novelist, playwright, memoirist, and nonfiction writer. He won the Nobel Prize in Literature in 1981, \"for writings marked by a broad outlook, a wealth of ideas and artistic power\". He is noted for his nonfiction book Crowds and Power, among other works.", "title": "" }, { "paragraph_id": 2, "text": "Born in 1905 to businessman Jacques Canetti and Mathilde née Arditti in Ruse, a city on the Danube in Bulgaria, Canetti was the eldest of three sons. His ancestors were Sephardic Jews. His paternal ancestors settled in Ruse from Ottoman Adrianople. The original family name was Cañete, named after Cañete, Cuenca, a village in Spain.", "title": "Life and work" }, { "paragraph_id": 3, "text": "In Ruse, Canetti's father and grandfather were successful merchants who operated out of a commercial building, which they had built in 1898. Canetti's mother descended from the Arditti family, one of the oldest Sephardic families in Bulgaria, who were among the founders of the Ruse Jewish colony in the late 18th century. The Ardittis can be traced to the 14th century, when they were court physicians and astronomers to the Aragonese royal court of Alfonso IV and Pedro IV. Before settling in Ruse, they had migrated into Italy and lived in Livorno in the 17th century.", "title": "Life and work" }, { "paragraph_id": 4, "text": "Canetti spent his childhood years, from 1905 to 1911, in Ruse until the family moved to Manchester, England, where Canetti's father joined a business established by his wife's brothers. In 1912, his father died suddenly, and his mother moved with their children first to Lausanne, then Vienna in the same year. They lived in Vienna from the time Canetti was aged seven onwards. His mother insisted that he speak German and taught it to him. By this time, Canetti already spoke Ladino (his native language), Bulgarian, English, and some French; the last two he studied in the one year that they were in Britain. Subsequently, the family moved first (from 1916 to 1921) to Zürich and then (until 1924) to Frankfurt, where Canetti graduated from high school.", "title": "Life and work" }, { "paragraph_id": 5, "text": "Canetti went back to Vienna in 1924 in order to study chemistry. However, his primary interests during his years in Vienna became philosophy and literature. Introduced into the literary circles of First Republic Vienna, he started writing. Politically leaning towards the left, he was present at the July Revolt of 1927, came near to the action accidentally, was most impressed by the burning of books (recalled frequently in his writings) and left the place quickly with his bicycle. He received a doctorate in chemistry from the University of Vienna in 1929, but never worked as a chemist.", "title": "Life and work" }, { "paragraph_id": 6, "text": "He published two works in Vienna, Komödie der Eitelkeit 1934 (The Comedy of Vanity) and Die Blendung 1935 (Auto-da-Fé, 1935), before escaping to Great Britain. 
He reflected the experiences of Nazi Germany and political chaos in his works, especially exploring mob action and group thinking in the novel Die Blendung and in the non-fiction Crowds and Power (1960). He wrote several volumes of memoirs, contemplating the influence of his multi-lingual background and childhood.", "title": "Life and work" }, { "paragraph_id": 7, "text": "In 1934 in Vienna he married Veza (Venetiana) Taubner-Calderon (1897–1963), who acted as his muse and devoted literary assistant. Canetti remained open to relationships with other women. He had a short affair with the sculptor Anna Mahler, the daughter of the composer Gustav Mahler. In 1938, after the Anschluss with Germany, the Canettis moved to London. He became closely involved with the painter Marie-Louise von Motesiczky, who was to remain a close companion for many years. His name has also been linked with the author Iris Murdoch (see John Bayley's Iris, A Memoir of Iris Murdoch, which has several references to an author, referred to as \"the Dichter\", who was a Nobel Laureate and whose works included Die Blendung [English title Auto-da-Fé]).", "title": "Personal life" }, { "paragraph_id": 8, "text": "After Veza died in 1963, Canetti married Hera Buschor (1933–1988), with whom he had a daughter, Johanna, in 1972. Canetti's brother Jacques Canetti settled in Paris, where he championed a revival of French chanson. Despite being a German-language writer, Canetti settled in Britain until the 1970s, receiving British citizenship in 1952. For his last 20 years, Canetti lived mostly in Zürich.", "title": "Personal life" }, { "paragraph_id": 9, "text": "A writer in German, Canetti won the Nobel Prize in Literature in 1981, \"for writings marked by a broad outlook, a wealth of ideas and artistic power\". He is known chiefly for his celebrated trilogy of autobiographical memoirs of his childhood and of pre-Anschluss Vienna: Die Gerettete Zunge (The Tongue Set Free); Die Fackel im Ohr (The Torch in My Ear), and Das Augenspiel (The Play of the Eyes); for his modernist novel Auto-da-Fé (Die Blendung); and for Crowds and Power, a psychological study of crowd behaviour as it manifests itself in human activities ranging from mob violence to religious congregations.", "title": "Personal life" }, { "paragraph_id": 10, "text": "In the 1970s, Canetti began to travel more frequently to Zurich, where he settled and lived for his last 20 years. He died in Zürich in 1994.", "title": "Personal life" } ]
Elias Canetti was a German-language writer, born in Ruse, Bulgaria to a Sephardic family. They moved to Manchester, England, but his father died in 1912, and his mother took her three sons back to continental Europe. They settled in Vienna. Canetti moved to England in 1938 after the Anschluss to escape Nazi persecution. He became a British citizen in 1952. He is known as a modernist novelist, playwright, memoirist, and nonfiction writer. He won the Nobel Prize in Literature in 1981, "for writings marked by a broad outlook, a wealth of ideas and artistic power". He is noted for his nonfiction book Crowds and Power, among other works.
2001-06-08T20:55:33Z
2023-12-21T21:50:00Z
[ "Template:IPA-de", "Template:Cite book", "Template:ISBN", "Template:Commons category", "Template:Authority control", "Template:Short description", "Template:Reflist", "Template:Cite web", "Template:Cite journal", "Template:OL author", "Template:German literature", "Template:Gottfried-Keller-Preis winners", "Template:Use dmy dates", "Template:Lang-bg", "Template:Wikiquote", "Template:Perlentaucher", "Template:Internet Archive author", "Template:Georg Büchner Prize", "Template:Nobel Prize in Literature Laureates 1976-2000", "Template:Infobox writer", "Template:IPAc-en", "Template:Cn", "Template:Issn", "Template:Cite news", "Template:Nobelprize", "Template:1981 Nobel Prize winners" ]
https://en.wikipedia.org/wiki/Elias_Canetti
9,506
Edward Jenner
Edward Jenner FRS FRCPE (17 May 1749 – 26 January 1823) was an English physician and scientist who pioneered the concept of vaccines and created the smallpox vaccine, the world's first vaccine. The terms vaccine and vaccination are derived from Variolae vaccinae ('pustules of the cow'), the term devised by Jenner to denote cowpox. He used it in 1798 in the title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox. In the West, Jenner is often called "the father of immunology", and his work is said to have saved "more lives than any other man". In Jenner's time, smallpox killed around 10% of global population, with the number as high as 20% in towns and cities where infection spread more easily. In 1821, he was appointed physician to King George IV, and was also made mayor of Berkeley and justice of the peace. He was a member of the Royal Society. In the field of zoology, he was among the first modern scholars to describe the brood parasitism of the cuckoo (Aristotle also noted this behaviour in his History of Animals). In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons. Edward Jenner was born on 17 May 1749 in Berkeley, Gloucestershire, England as the eighth of nine children. His father, the Reverend Stephen Jenner, was the vicar of Berkeley, so Jenner received a strong basic education. When he was young, he went to school in Wotton-under-Edge at Katherine Lady Berkeley's School and in Cirencester. During this time, he was inoculated (by variolation) for smallpox, which had a lifelong effect upon his general health. At the age of 14, he was apprenticed for seven years to Daniel Ludlow, a surgeon of Chipping Sodbury, South Gloucestershire, where he gained most of the experience needed to become a surgeon himself. In 1770, aged 21, Jenner became apprenticed in surgery and anatomy under surgeon John Hunter and others at St George's Hospital, London. William Osler records that Hunter gave Jenner William Harvey's advice, well known in medical circles (and characteristic of the Age of Enlightenment), "Don't think; try." Hunter remained in correspondence with Jenner over natural history and proposed him for the Royal Society. Returning to his native countryside by 1773, Jenner became a successful family doctor and surgeon, practising on dedicated premises at Berkeley. In 1792, "with twenty years' experience of general practice and surgery, Jenner obtained the degree of MD from the University of St Andrews". Jenner and others formed the Fleece Medical Society or Gloucestershire Medical Society, so called because it met in the parlour of the Fleece Inn, Rodborough, Gloucestershire. Members dined together and read papers on medical subjects. Jenner contributed papers on angina pectoris, ophthalmia, and cardiac valvular disease and commented on cowpox. He also belonged to a similar society which met in Alveston, near Bristol. He became a master mason on 30 December 1802, in Lodge of Faith and Friendship #449. From 1812 to 1813, he served as worshipful master of Royal Berkeley Lodge of Faith and Friendship. Jenner was elected fellow of the Royal Society in 1788, following his publication of a careful study of the previously misunderstood life of the nested cuckoo, a study that combined observation, experiment, and dissection. Jenner described how the newly hatched cuckoo pushed its host's eggs and fledgling chicks out of the nest (contrary to existing belief that the adult cuckoo did it). 
Having observed this behaviour, Jenner demonstrated an anatomical adaptation for it – the baby cuckoo has a depression in its back, not present after 12 days of life, that enables it to cup eggs and other chicks. The adult does not remain long enough in the area to perform this task. Jenner's findings were published in Philosophical Transactions of the Royal Society in 1788. "The singularity of its shape is well adapted to these purposes; for, different from other newly hatched birds, its back from the scapula downwards is very broad, with a considerable depression in the middle. This depression seems formed by nature for the design of giving a more secure lodgement to the egg of the Hedge-sparrow, or its young one, when the young Cuckoo is employed in removing either of them from the nest. When it is about twelve days old, this cavity is quite filled up, and then the back assumes the shape of nestling birds in general." Jenner's nephew assisted in the study. He was born on 30 June 1737. Jenner's understanding of the cuckoo's behaviour was not entirely believed until the artist Jemima Blackburn, a keen observer of birdlife, saw a blind nestling pushing out a host's egg. Blackburn's description and illustration were enough to convince Charles Darwin to revise a later edition of On the Origin of Species. Jenner's interest in zoology played a large role in his first experiment with inoculation. Not only did he have a profound understanding of human anatomy due to his medical training, but he also understood animal biology and its role in human-animal trans-species boundaries in disease transmission. At the time, there was no way of knowing how important this connection would be to the history and discovery of vaccinations. We see this connection now; many present-day vaccinations include animal parts from cows, rabbits, and chicken eggs, which can be attributed to the work of Jenner and his cowpox/smallpox vaccination. Jenner married Catherine Kingscote (who died in 1815 from tuberculosis) in March 1788. He might have met her while he and other fellows were experimenting with balloons. Jenner's trial balloon descended into Kingscote Park, Gloucestershire, owned by Catherine's father Anthony Kingscote. They had three children: Edward Robert (1789–1810), Robert Fitzharding (1792–1854) and Catherine (1794–1833). He earned his MD from the University of St Andrews in 1792. He is credited with advancing the understanding of angina pectoris. In his correspondence with Heberden, he wrote: "How much the heart must suffer from the coronary arteries not being able to perform their functions". Inoculation was already a standard practice in Asian and African medicine but involved serious risks, including the possibility that those inoculated would become contagious and spread the disease to others. In 1721, Lady Mary Wortley Montagu had imported variolation to Britain after having observed it in Istanbul. While Johnnie Notions had great success with his self-devised inoculation (and was reputed not to have lost a single patient), his method's practice was limited to the Shetland Isles. Voltaire wrote that at this time 60% of the population caught smallpox and 20% of the population died from it. Voltaire also states that the Circassians used the inoculation from times immemorial, and the custom may have been borrowed by the Turks from the Circassians. In 1766, Daniel Bernoulli analysed smallpox morbidity and mortality data to demonstrate the efficacy of inoculation. 
By 1768, English physician John Fewster had realised that prior infection with cowpox rendered a person immune to smallpox. In the years following 1770, at least five investigators in England and Germany (Sevel, Jensen, Jesty 1774, Rendell, Plett 1791) successfully tested in humans a cowpox vaccine against smallpox. For example, Dorset farmer Benjamin Jesty successfully vaccinated and presumably induced immunity with cowpox in his wife and two children during a smallpox epidemic in 1774, but it was not until Jenner's work that the procedure became widely understood. Jenner may have been aware of Jesty's procedures and success. A similar observation was later made in France by Jacques Antoine Rabaut-Pommier in 1780. Jenner postulated that the pus in the blisters of individuals affected by cowpox (a disease similar to smallpox, but much less virulent) protected them from smallpox. On 14 May 1796, Jenner tested his hypothesis by inoculating James Phipps, an eight-year-old boy who was the son of Jenner's gardener. He scraped pus from cowpox blisters on the hands of Sarah Nelmes, a milkmaid who had caught cowpox from a cow called Blossom, whose hide now hangs on the wall of the St. George's Medical School library (now in Tooting). Phipps was the 17th case described in Jenner's first paper on vaccination. Jenner inoculated Phipps in both arms that day, subsequently producing in Phipps a fever and some uneasiness, but no full-blown infection. Later, he injected Phipps with variolous material, the routine method of immunization at that time. No disease followed. The boy was later challenged with variolous material and again showed no sign of infection. No unexpected side effects occurred, and neither Phipps nor any other recipients underwent any future 'breakthrough' cases. Jenner's biographer John Baron would later speculate that Jenner understood one could be inoculated against smallpox by being exposed to cowpox by observing the unblemished complexion of milkmaids, rather than building on the work of his predecessors. The milkmaids story is still widely repeated even though it appears to be a myth. Donald Hopkins has written, "Jenner's unique contribution was not that he inoculated a few persons with cowpox, but that he then proved [by subsequent challenges] that they were immune to smallpox. Moreover, he demonstrated that the protective cowpox pus could be effectively inoculated from person to person, not just directly from cattle." Jenner successfully tested his hypothesis on 23 additional subjects. Jenner continued his research and reported it to the Royal Society, which did not publish the initial paper. After revisions and further investigations, he published his findings on the 23 cases, including his 11-month-old son Robert. Some of his conclusions were correct, some erroneous; modern microbiological and microscopic methods would make his studies easier to reproduce. The medical establishment deliberated at length over his findings before accepting them. Eventually, vaccination was accepted, and in 1840, the British government banned variolation – the use of smallpox to induce immunity – and provided vaccination using cowpox free of charge (see Vaccination Act). The success of his discovery soon spread around Europe and was used en masse in the Spanish Balmis Expedition (1803–1806), a three-year-long mission to the Americas, the Philippines, Macao, China, led by Francisco Javier de Balmis with the aim of giving thousands the smallpox vaccine.
The expedition was successful, and Jenner wrote: "I don't imagine the annals of history furnish an example of philanthropy so noble, so extensive as this". Napoleon, who at the time was at war with Britain, had all his French troops vaccinated, awarded Jenner a medal, and at the request of Jenner, he released two English prisoners of war and permitted their return home. Napoleon remarked he could not "refuse anything to one of the greatest benefactors of mankind". Jenner's continuing work on vaccination prevented him from continuing his ordinary medical practice. He was supported by his colleagues and the King in petitioning Parliament, and was granted £10,000 in 1802 for his work on vaccination. In 1807, he was granted another £20,000 after the Royal College of Physicians confirmed the widespread efficacy of vaccination. Jenner was later elected a foreign honorary member of the American Academy of Arts and Sciences in 1802, a member of the American Philosophical Society in 1804, and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1803 in London, he became president of the Jennerian Society, concerned with promoting vaccination to eradicate smallpox. The Jennerian ceased operations in 1809. Jenner became a member of the Medical and Chirurgical Society on its founding in 1805 (now the Royal Society of Medicine) and presented several papers there. In 1808, with government aid, the National Vaccine Establishment was founded, but Jenner felt dishonoured by the men selected to run it and resigned his directorship. Returning to London in 1811, Jenner observed a significant number of cases of smallpox after vaccination. He found that in these cases the severity of the illness was notably diminished by previous vaccination. In 1821, he was appointed physician extraordinary to King George IV, and was also made mayor of Berkeley and magistrate (justice of the peace). He continued to investigate natural history, and in 1823, the last year of his life, he presented his "Observations on the Migration of Birds" to the Royal Society. Jenner was a Freemason. Jenner was found in a state of apoplexy on 25 January 1823, with his right side paralysed. He did not recover and died the next day of an apparent stroke, his second, on 26 January 1823, aged 73. He was buried in the family vault at the Church of St Mary, Berkeley. Neither fanatic nor lax, Jenner was a Christian who in his personal correspondence showed himself quite spiritual. Some days before his death, he stated to a friend: "I am not surprised that men are not grateful to me; but I wonder that they are not grateful to God for the good which He has made me the instrument of conveying to my fellow creatures". In 1980, the World Health Organization declared smallpox an eradicated disease. This was the result of coordinated public health efforts, but vaccination was an essential component. Although the disease was declared eradicated, some pus samples still remain in laboratories in Centers for Disease Control and Prevention in Atlanta in the US, and in State Research Center of Virology and Biotechnology VECTOR in Koltsovo, Novosibirsk Oblast, Russia. Jenner's vaccine laid the foundation for contemporary discoveries in immunology. In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons following a UK-wide vote. Commemorated on postage stamps issued by the Royal Mail, in 1999 he featured in their World Changers issue along with Charles Darwin, Michael Faraday and Alan Turing. 
The lunar crater Jenner is named in his honour.
[ { "paragraph_id": 0, "text": "Edward Jenner FRS FRCPE (17 May 1749 – 26 January 1823) was an English physician and scientist who pioneered the concept of vaccines and created the smallpox vaccine, the world's first vaccine. The terms vaccine and vaccination are derived from Variolae vaccinae ('pustules of the cow'), the term devised by Jenner to denote cowpox. He used it in 1798 in the title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox.", "title": "" }, { "paragraph_id": 1, "text": "In the West, Jenner is often called \"the father of immunology\", and his work is said to have saved \"more lives than any other man\". In Jenner's time, smallpox killed around 10% of global population, with the number as high as 20% in towns and cities where infection spread more easily. In 1821, he was appointed physician to King George IV, and was also made mayor of Berkeley and justice of the peace. He was a member of the Royal Society. In the field of zoology, he was among the first modern scholars to describe the brood parasitism of the cuckoo (Aristotle also noted this behaviour in his History of Animals). In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons.", "title": "" }, { "paragraph_id": 2, "text": "Edward Jenner was born on 17 May 1749 in Berkeley, Gloucestershire, England as the eighth of nine children. His father, the Reverend Stephen Jenner, was the vicar of Berkeley, so Jenner received a strong basic education.", "title": "Early life" }, { "paragraph_id": 3, "text": "When he was young, he went to school in Wotton-under-Edge at Katherine Lady Berkeley's School and in Cirencester. During this time, he was inoculated (by variolation) for smallpox, which had a lifelong effect upon his general health. At the age of 14, he was apprenticed for seven years to Daniel Ludlow, a surgeon of Chipping Sodbury, South Gloucestershire, where he gained most of the experience needed to become a surgeon himself.", "title": "Early life" }, { "paragraph_id": 4, "text": "In 1770, aged 21, Jenner became apprenticed in surgery and anatomy under surgeon John Hunter and others at St George's Hospital, London. William Osler records that Hunter gave Jenner William Harvey's advice, well known in medical circles (and characteristic of the Age of Enlightenment), \"Don't think; try.\" Hunter remained in correspondence with Jenner over natural history and proposed him for the Royal Society. Returning to his native countryside by 1773, Jenner became a successful family doctor and surgeon, practising on dedicated premises at Berkeley. In 1792, \"with twenty years' experience of general practice and surgery, Jenner obtained the degree of MD from the University of St Andrews\".", "title": "Early life" }, { "paragraph_id": 5, "text": "Jenner and others formed the Fleece Medical Society or Gloucestershire Medical Society, so called because it met in the parlour of the Fleece Inn, Rodborough, Gloucestershire. Members dined together and read papers on medical subjects. Jenner contributed papers on angina pectoris, ophthalmia, and cardiac valvular disease and commented on cowpox. He also belonged to a similar society which met in Alveston, near Bristol.", "title": "Early life" }, { "paragraph_id": 6, "text": "He became a master mason on 30 December 1802, in Lodge of Faith and Friendship #449. 
From 1812 to 1813, he served as worshipful master of Royal Berkeley Lodge of Faith and Friendship.", "title": "Early life" }, { "paragraph_id": 7, "text": "Jenner was elected fellow of the Royal Society in 1788, following his publication of a careful study of the previously misunderstood life of the nested cuckoo, a study that combined observation, experiment, and dissection.", "title": "Zoology" }, { "paragraph_id": 8, "text": "Jenner described how the newly hatched cuckoo pushed its host's eggs and fledgling chicks out of the nest (contrary to existing belief that the adult cuckoo did it). Having observed this behaviour, Jenner demonstrated an anatomical adaptation for it – the baby cuckoo has a depression in its back, not present after 12 days of life, that enables it to cup eggs and other chicks. The adult does not remain long enough in the area to perform this task. Jenner's findings were published in Philosophical Transactions of the Royal Society in 1788.", "title": "Zoology" }, { "paragraph_id": 9, "text": "\"The singularity of its shape is well adapted to these purposes; for, different from other newly hatched birds, its back from the scapula downwards is very broad, with a considerable depression in the middle. This depression seems formed by nature for the design of giving a more secure lodgement to the egg of the Hedge-sparrow, or its young one, when the young Cuckoo is employed in removing either of them from the nest. When it is about twelve days old, this cavity is quite filled up, and then the back assumes the shape of nestling birds in general.\" Jenner's nephew assisted in the study. He was born on 30 June 1737.", "title": "Zoology" }, { "paragraph_id": 10, "text": "Jenner's understanding of the cuckoo's behaviour was not entirely believed until the artist Jemima Blackburn, a keen observer of birdlife, saw a blind nestling pushing out a host's egg. Blackburn's description and illustration were enough to convince Charles Darwin to revise a later edition of On the Origin of Species.", "title": "Zoology" }, { "paragraph_id": 11, "text": "Jenner's interest in zoology played a large role in his first experiment with inoculation. Not only did he have a profound understanding of human anatomy due to his medical training, but he also understood animal biology and its role in human-animal trans-species boundaries in disease transmission. At the time, there was no way of knowing how important this connection would be to the history and discovery of vaccinations. We see this connection now; many present-day vaccinations include animal parts from cows, rabbits, and chicken eggs, which can be attributed to the work of Jenner and his cowpox/smallpox vaccination.", "title": "Zoology" }, { "paragraph_id": 12, "text": "Jenner married Catherine Kingscote (who died in 1815 from tuberculosis) in March 1788. He might have met her while he and other fellows were experimenting with balloons. Jenner's trial balloon descended into Kingscote Park, Gloucestershire, owned by Catherine's father Anthony Kingscote. They had three children: Edward Robert (1789–1810), Robert Fitzharding (1792–1854) and Catherine (1794–1833).", "title": "Marriage and human medicine" }, { "paragraph_id": 13, "text": "He earned his MD from the University of St Andrews in 1792. He is credited with advancing the understanding of angina pectoris. 
In his correspondence with Heberden, he wrote: \"How much the heart must suffer from the coronary arteries not being able to perform their functions\".", "title": "Marriage and human medicine" }, { "paragraph_id": 14, "text": "Inoculation was already a standard practice in Asian and African medicine but involved serious risks, including the possibility that those inoculated would become contagious and spread the disease to others. In 1721, Lady Mary Wortley Montagu had imported variolation to Britain after having observed it in Istanbul. While Johnnie Notions had great success with his self-devised inoculation (and was reputed not to have lost a single patient), his method's practice was limited to the Shetland Isles. Voltaire wrote that at this time 60% of the population caught smallpox and 20% of the population died from it. Voltaire also states that the Circassians used the inoculation from times immemorial, and the custom may have been borrowed by the Turks from the Circassians. In 1766, Daniel Bernoulli analysed smallpox morbidity and mortality data to demonstrate the efficacy of inoculation.", "title": "Invention of the vaccine" }, { "paragraph_id": 15, "text": "By 1768, English physician John Fewster had realised that prior infection with cowpox rendered a person immune to smallpox. In the years following 1770, at least five investigators in England and Germany (Sevel, Jensen, Jesty 1774, Rendell, Plett 1791) successfully tested in humans a cowpox vaccine against smallpox. For example, Dorset farmer Benjamin Jesty successfully vaccinated and presumably induced immunity with cowpox in his wife and two children during a smallpox epidemic in 1774, but it was not until Jenner's work that the procedure became widely understood. Jenner may have been aware of Jesty's procedures and success. A similar observation was later made in France by Jacques Antoine Rabaut-Pommier in 1780.", "title": "Invention of the vaccine" }, { "paragraph_id": 16, "text": "Jenner postulated that the pus in the blisters that affected individuals affected by cowpox (a disease similar to smallpox, but much less virulent) protected them from smallpox. On 14 May 1796, Jenner tested his hypothesis by inoculating James Phipps, an eight-year-old boy who was the son of Jenner's gardener. He scraped pus from cowpox blisters on the hands of Sarah Nelmes, a milkmaid who had caught cowpox from a cow called Blossom, whose hide now hangs on the wall of the St. George's Medical School library (now in Tooting). Phipps was the 17th case described in Jenner's first paper on vaccination.", "title": "Invention of the vaccine" }, { "paragraph_id": 17, "text": "Jenner inoculated Phipps in both arms that day, subsequently producing in Phipps a fever and some uneasiness, but no full-blown infection. Later, he injected Phipps with variolous material, the routine method of immunization at that time. No disease followed. The boy was later challenged with variolous material and again showed no sign of infection. No unexpected side effects occurred, and neither Phipps nor any other recipients underwent any future 'breakthrough' cases.", "title": "Invention of the vaccine" }, { "paragraph_id": 18, "text": "Jenner's biographer John Baron would later speculate that Jenner understood one could be inoculated against smallpox by being exposed to cowpox by observing the unblemished complexion of milkmaids, rather than building on the work of his predecessors. 
The milkmaids story is still widely repeated even though it appears to be a myth.", "title": "Invention of the vaccine" }, { "paragraph_id": 19, "text": "Donald Hopkins has written, \"Jenner's unique contribution was not that he inoculated a few persons with cowpox, but that he then proved [by subsequent challenges] that they were immune to smallpox. Moreover, he demonstrated that the protective cowpox pus could be effectively inoculated from person to person, not just directly from cattle.\" Jenner successfully tested his hypothesis on 23 additional subjects.", "title": "Invention of the vaccine" }, { "paragraph_id": 20, "text": "Jenner continued his research and reported it to the Royal Society, which did not publish the initial paper. After revisions and further investigations, he published his findings on the 23 cases, including his 11-month-old son Robert. Some of his conclusions were correct, some erroneous; modern microbiological and microscopic methods would make his studies easier to reproduce. The medical establishment deliberated at length over his findings before accepting them. Eventually, vaccination was accepted, and in 1840, the British government banned variolation – the use of smallpox to induce immunity – and provided vaccination using cowpox free of charge (see Vaccination Act).", "title": "Invention of the vaccine" }, { "paragraph_id": 21, "text": "The success of his discovery soon spread around Europe and was used en masse in the Spanish Balmis Expedition (1803–1806), a three-year-long mission to the Americas, the Philippines, Macao, China, led by Francisco Javier de Balmis with the aim of giving thousands the smallpox vaccine. The expedition was successful, and Jenner wrote: \"I don't imagine the annals of history furnish an example of philanthropy so noble, so extensive as this\". Napoleon, who at the time was at war with Britain, had all his French troops vaccinated, awarded Jenner a medal, and at the request of Jenner, he released two English prisoners of war and permitted their return home. Napoleon remarked he could not \"refuse anything to one of the greatest benefactors of mankind\".", "title": "Invention of the vaccine" }, { "paragraph_id": 22, "text": "Jenner's continuing work on vaccination prevented him from continuing his ordinary medical practice. He was supported by his colleagues and the King in petitioning Parliament, and was granted £10,000 in 1802 for his work on vaccination. In 1807, he was granted another £20,000 after the Royal College of Physicians confirmed the widespread efficacy of vaccination.", "title": "Invention of the vaccine" }, { "paragraph_id": 23, "text": "Jenner was later elected a foreign honorary member of the American Academy of Arts and Sciences in 1802, a member of the American Philosophical Society in 1804, and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1803 in London, he became president of the Jennerian Society, concerned with promoting vaccination to eradicate smallpox. The Jennerian ceased operations in 1809. Jenner became a member of the Medical and Chirurgical Society on its founding in 1805 (now the Royal Society of Medicine) and presented several papers there. In 1808, with government aid, the National Vaccine Establishment was founded, but Jenner felt dishonoured by the men selected to run it and resigned his directorship.", "title": "Later life" }, { "paragraph_id": 24, "text": "Returning to London in 1811, Jenner observed a significant number of cases of smallpox after vaccination. 
He found that in these cases the severity of the illness was notably diminished by previous vaccination. In 1821, he was appointed physician extraordinary to King George IV, and was also made mayor of Berkeley and magistrate (justice of the peace). He continued to investigate natural history, and in 1823, the last year of his life, he presented his \"Observations on the Migration of Birds\" to the Royal Society.", "title": "Later life" }, { "paragraph_id": 25, "text": "Jenner was a Freemason.", "title": "Later life" }, { "paragraph_id": 26, "text": "Jenner was found in a state of apoplexy on 25 January 1823, with his right side paralysed. He did not recover and died the next day of an apparent stroke, his second, on 26 January 1823, aged 73. He was buried in the family vault at the Church of St Mary, Berkeley.", "title": "Death" }, { "paragraph_id": 27, "text": "Neither fanatic nor lax, Jenner was a Christian who in his personal correspondence showed himself quite spiritual. Some days before his death, he stated to a friend: \"I am not surprised that men are not grateful to me; but I wonder that they are not grateful to God for the good which He has made me the instrument of conveying to my fellow creatures\".", "title": "Religious views" }, { "paragraph_id": 28, "text": "In 1980, the World Health Organization declared smallpox an eradicated disease. This was the result of coordinated public health efforts, but vaccination was an essential component. Although the disease was declared eradicated, some pus samples still remain in laboratories in Centers for Disease Control and Prevention in Atlanta in the US, and in State Research Center of Virology and Biotechnology VECTOR in Koltsovo, Novosibirsk Oblast, Russia.", "title": "Legacy" }, { "paragraph_id": 29, "text": "Jenner's vaccine laid the foundation for contemporary discoveries in immunology. In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons following a UK-wide vote. Commemorated on postage stamps issued by the Royal Mail, in 1999 he featured in their World Changers issue along with Charles Darwin, Michael Faraday and Alan Turing. The lunar crater Jenner is named in his honour.", "title": "Legacy" } ]
Edward Jenner was an English physician and scientist who pioneered the concept of vaccines and created the smallpox vaccine, the world's first vaccine. The terms vaccine and vaccination are derived from Variolae vaccinae, the term devised by Jenner to denote cowpox. He used it in 1798 in the title of his Inquiry into the Variolae vaccinae known as the Cow Pox, in which he described the protective effect of cowpox against smallpox. In the West, Jenner is often called "the father of immunology", and his work is said to have saved "more lives than any other man". In Jenner's time, smallpox killed around 10% of global population, with the number as high as 20% in towns and cities where infection spread more easily. In 1821, he was appointed physician to King George IV, and was also made mayor of Berkeley and justice of the peace. He was a member of the Royal Society. In the field of zoology, he was among the first modern scholars to describe the brood parasitism of the cuckoo. In 2002, Jenner was named in the BBC's list of the 100 Greatest Britons.
2001-06-15T09:12:22Z
2023-12-13T18:51:51Z
[ "Template:Cite news", "Template:Librivox author", "Template:Short description", "Template:Use British English", "Template:Post-nominals", "Template:Clear left", "Template:Spaced ndash", "Template:Cite web", "Template:Snd", "Template:Reflist", "Template:Cite DNB", "Template:Gutenberg author", "Template:Portal", "Template:Cite journal", "Template:Cite book", "Template:History of infectious disease", "Template:Internet Archive author", "Template:Authority control", "Template:Infobox scientist", "Template:Circa", "Template:Vaccines", "Template:For", "Template:Webarchive", "Template:Colend", "Template:R", "Template:Sidebar", "Template:Cite ODNB", "Template:Sisterlinks", "Template:Use dmy dates", "Template:Coord", "Template:Clear", "Template:ISBN?", "Template:Colbegin" ]
https://en.wikipedia.org/wiki/Edward_Jenner
9,508
Encyclopædia Britannica
The Encyclopædia Britannica (Latin for "British Encyclopædia") is a general knowledge English-language encyclopaedia. It has been published by Encyclopædia Britannica, Inc. since 1768, although the company has changed ownership seven times. The encyclopaedia is maintained by about 100 full-time editors and more than 4,000 contributors. The 2010 version of the 15th edition, which spans 32 volumes and 32,640 pages, was the last printed edition. Since 2016, it has been published exclusively as an online encyclopaedia. Printed for 244 years, the Britannica was the longest-running in-print encyclopaedia in the English language. It was first published between 1768 and 1771 in the Scottish capital of Edinburgh, as three volumes. The encyclopaedia grew in size: the second edition was 10 volumes, and by its fourth edition (1801–1810) it had expanded to 20 volumes. Its rising stature as a scholarly work helped recruit eminent contributors, and the 9th (1875–1889) and 11th editions (1911) are landmark encyclopaedias for scholarship and literary style. Starting with the 11th edition and following its acquisition by an American firm, the Britannica shortened and simplified articles to broaden its appeal to the North American market. In 1933, the Britannica became the first encyclopaedia to adopt "continuous revision", in which the encyclopaedia is continually reprinted, with every article updated on a schedule. In the 21st century, the Britannica has suffered due to competition with the online crowdsourced encyclopaedia Wikipedia, although the Britannica was previously suffering from competition with the digital multimedia encyclopaedia Microsoft Encarta. In March 2012, it announced it would no longer publish printed editions and would focus instead on the online version. Britannica has been assessed to be politically closer to the centre of the US political spectrum than Wikipedia. The 15th edition has a three-part structure: a 12-volume Micropædia of short articles (generally fewer than 750 words), a 17-volume Macropædia of long articles (two to 310 pages), and a single Propædia volume to give a hierarchical outline of knowledge. The Micropædia was meant for quick fact-checking and as a guide to the Macropædia; readers are advised to study the Propædia outline to understand a subject's context and to find more detailed articles. Over 70 years, the size of the Britannica has remained steady, with about 40 million words on half a million topics. Though published in the United States since 1901, the Britannica has for the most part maintained British English spelling. Since 1985, the Britannica had four parts: the Micropædia, the Macropædia, the Propædia, and a two-volume index. The Britannica's articles are found in the Micro- and Macropædia, which encompass 12 and 17 volumes, respectively, each volume having roughly one thousand pages. The 2007 Macropædia has 699 in-depth articles, ranging in length from 2 to 310 pages and having references and named contributors. In contrast, the 2007 Micropædia has roughly 65,000 articles, the vast majority (about 97%) of which contain fewer than 750 words, no references, and no named contributors. The Micropædia articles are intended for quick fact-checking and to help in finding more thorough information in the Macropædia. The Macropædia articles are meant both as authoritative, well-written articles on their subjects and as storehouses of information not covered elsewhere. 
The longest article (310 pages) is on the United States, and resulted from the merger of the articles on the individual states. A 2013 "Global Edition" of Britannica contained approximately forty thousand articles. Information can be found in the Britannica by following the cross-references in the Micropædia and Macropædia; however, these are sparse, averaging one cross-reference per page. Hence, readers are instead recommended to consult the alphabetical index or the Propædia, which organizes the Britannica's contents by topic. The core of the Propædia is its "Outline of Knowledge", which aims to provide a logical framework for all human knowledge. Accordingly, the Outline is consulted by the Britannica's editors to decide which articles should be included in the Micro- and Macropædia. The Outline is also intended to be a study guide, to put subjects in their proper perspective, and to suggest a series of Britannica articles for the student wishing to learn a topic in depth. However, libraries have found that it is scarcely used, and reviewers have recommended that it be dropped from the encyclopaedia. The Propædia also has color transparencies of human anatomy and several appendices listing the staff members, advisors, and contributors to all three parts of the Britannica. Taken together, the Micropædia and Macropædia comprise roughly 40 million words and 24,000 images. The two-volume index has 2,350 pages, listing the 228,274 topics covered in the Britannica, together with 474,675 subentries under those topics. The Britannica generally prefers British spelling over American; for example, it uses colour (not color), centre (not center), and encyclopaedia (not encyclopedia). However, there are exceptions to this rule, such as defense rather than defence. Common alternative spellings are provided with cross-references such as "Color: see Colour." Since 1936, the articles of the Britannica have been revised on a regular schedule, with at least 10% of them considered for revision each year. According to one Britannica website, 46% of its articles were revised over the past three years; however, according to another Britannica website, only 35% of the articles were revised. The alphabetization of articles in the Micropædia and Macropædia follows strict rules. Diacritical marks and non-English letters are ignored, while numerical entries such as "1812, War of" are alphabetized as if the number had been written out ("Eighteen-twelve, War of"). Articles with identical names are ordered first by persons, then by places, then by things. Rulers with identical names are organized first alphabetically by country and then by chronology; thus, Charles III of France precedes Charles I of England, listed in Britannica as the ruler of Great Britain and Ireland. (That is, they are alphabetized as if their titles were "Charles, France, 3" and "Charles, Great Britain and Ireland, 1".) Similarly, places that share names are organized alphabetically by country, then by ever-smaller political divisions. In March 2012, the company announced that the 2010 edition would be the last printed version. This was announced as a move by the company to adapt to the times and focus on its future using digital distribution. The peak year for the printed encyclopaedia was 1990 when 120,000 sets were sold, but it dropped to 40,000 in 1996. 12,000 sets of the 2010 edition were printed, of which 8,000 had been sold as of 2012. 
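The alphabetization rules described above amount, in effect, to a collation procedure: ignore diacritical marks, alphabetize numerals as if they were written out, and break ties between identically named entries by ordering persons before places before things. The short Python sketch below illustrates those rules only; it is a hypothetical example, not Britannica's actual editorial software, and the number-word table and the person/place/thing ranking it uses are assumptions made solely for this illustration.

import unicodedata

# Assumed lookup table for spelling out numerals; Britannica's real table is not given in the text.
NUMBER_WORDS = {"1812": "eighteen-twelve"}
# Assumed ranking: identically named entries are ordered persons, then places, then things.
KIND_RANK = {"person": 0, "place": 1, "thing": 2}

def britannica_sort_key(title, kind="thing"):
    # Ignore diacritical marks, as the encyclopaedia's alphabetization does.
    text = unicodedata.normalize("NFKD", title)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Alphabetize numerical entries as if the number had been written out.
    words = [NUMBER_WORDS.get(word.strip(","), word) for word in text.split()]
    return (" ".join(words).lower(), KIND_RANK.get(kind, 2))

entries = [
    ("1812, War of", "thing"),
    ("Charles, Great Britain and Ireland, 1", "person"),
    ("Charles, France, 3", "person"),
    ("Éire", "place"),
]
# Charles III of France sorts before Charles I of England, and "1812, War of" sorts
# as if it read "Eighteen-twelve, War of", matching the rules described above.
for title, kind in sorted(entries, key=lambda entry: britannica_sort_key(*entry)):
    print(title)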
By late April 2012, the remaining copies of the 2010 edition had sold out at Britannica's online store. As of 2016, a replica of Britannica's 1768 first edition is sold on the online store. Britannica Junior was first published in 1934 as 12 volumes. It was expanded to 15 volumes in 1947, and renamed Britannica Junior Encyclopædia in 1963. It was taken off the market after the 1984 printing. A British Children's Britannica edited by John Armitage was issued in London in 1960. Its contents were determined largely by the eleven-plus standardized tests given in Britain. Britannica introduced the Children's Britannica to the US market in 1988, aimed at ages seven to 14. In 1961, a 16-volume Young Children's Encyclopaedia was issued for children just learning to read. My First Britannica is aimed at children ages six to 12, and the Britannica Discovery Library is for children aged three to six (issued 1974 to 1991). There have been, and are, several abridged Britannica encyclopaedias. The single-volume Britannica Concise Encyclopædia has 28,000 short articles condensing the larger 32-volume Britannica; there are authorized translations in languages such as Chinese created by Encyclopedia of China Publishing House and Vietnamese. Compton's by Britannica, first published in 2007, incorporating the former Compton's Encyclopedia, is aimed at 10- to 17-year-olds and consists of 26 volumes and 11,000 pages. Since 1938, Encyclopædia Britannica, Inc. has published annually a Book of the Year covering the past year's events. A given edition of the Book of the Year is named in terms of the year of its publication, though the edition actually covers the events of the previous year. The company also publishes several specialized reference works, such as Shakespeare: The Essential Guide to the Life and Works of the Bard (Wiley, 2006). The Britannica Ultimate Reference Suite 2012 DVD contains over 100,000 articles. This includes regular Britannica articles, as well as others drawn from the Britannica Student Encyclopædia, and the Britannica Elementary Encyclopædia. The package includes a range of supplementary content including maps, videos, sound clips, animations and web links. It also offers study tools and dictionary and thesaurus entries from Merriam-Webster. Britannica Online is a website with more than 120,000 articles and is updated regularly. It has daily features, updates and links to news reports from The New York Times and the BBC. As of 2009, roughly 60% of Encyclopædia Britannica's revenue came from online operations, of which around 15% came from subscriptions to the consumer version of the websites. As of 2006, subscriptions were available on a yearly, monthly or weekly basis. Special subscription plans are offered to schools, colleges and libraries; such institutional subscribers constitute an important part of Britannica's business. Beginning in early 2007, the Britannica made articles freely available if they are hyperlinked from an external site. Non-subscribers are served pop-ups and advertising. On 20 February 2007, Encyclopædia Britannica, Incorporated announced that it was working with mobile phone search company AskMeNow to launch a mobile encyclopaedia. Users will be able to send a question via text message, and AskMeNow will search Britannica's 28,000-article concise encyclopaedia to return an answer to the query. Daily topical features sent directly to users' mobile phones are also planned. 
On 3 June 2008, an initiative to facilitate collaboration between online expert and amateur scholarly contributors for Britannica's online content (in the spirit of a wiki), with editorial oversight from Britannica staff, was announced. Approved contributions would be credited, though contributing automatically grants Encyclopædia Britannica, Incorporated perpetual, irrevocable license to those contributions. On 22 January 2009, Britannica's president, Jorge Cauz, announced that the company would be accepting edits and additions to the online Britannica website from the public. The published edition of the encyclopaedia will not be affected by the changes. Individuals wishing to edit the Britannica website will have to register under their real name and address prior to editing or submitting their content. All edits submitted will be reviewed and checked and will have to be approved by the encyclopaedia's professional staff. Contributions from non-academic users will sit in a separate section from the expert-generated Britannica content, as will content submitted by non-Britannica scholars. Articles written by users, if vetted and approved, will also only be available in a special section of the website, separate from the professional articles. Official Britannica material would carry a "Britannica Checked" stamp, to distinguish it from the user-generated content. On 14 September 2010, Encyclopædia Britannica, Inc. announced a partnership with mobile phone development company Concentric Sky to launch a series of iPhone products aimed at the K–12 market. On 20 July 2011, Encyclopædia Britannica, Incorporated announced that Concentric Sky had ported the Britannica Kids product line to Intel's Intel Atom-based Netbooks and on 26 October 2011 that it had launched its encyclopaedia as an iPad app. In 2010, Britannica released Britannica ImageQuest, a database of images. In March 2012, it was announced that the company would cease printing the encyclopaedia set, and that it would focus more on its online version. On 7 June 2018, Britannica released a Google Chrome extension, "Britannica Insights", which shows snippets of information from Britannica Online whenever the user performs a Google Search, in a box to the right of Google's results. Britannica Insights was also available as a Firefox extension but this was taken down due to a code review issue. The print version of the Britannica has 4,411 contributors, many eminent in their fields, such as Nobel laureate economist Milton Friedman, astronomer Carl Sagan, and surgeon Michael DeBakey. Roughly a quarter of the contributors are deceased, some as long ago as 1947 (Alfred North Whitehead), while another quarter are retired or emeritus. Most (approximately 98%) contribute to only a single article; however, 64 contributed to three articles, 23 contributed to four articles, 10 contributed to five articles, and 8 contributed to more than five articles. An exceptionally prolific contributor is Christine Sutton of the University of Oxford, who contributed 24 articles on particle physics. While Britannica's authors have included writers such as Albert Einstein, Marie Curie, and Leon Trotsky, as well as notable independent encyclopaedists such as Isaac Asimov, some have been criticized for lack of expertise. In 1911 the historian George L. Burr wrote: With a temerity almost appalling, [the Britannica contributor, Mr. Philips] ranges over nearly the whole field of European history, political, social, ecclesiastical... 
The grievance is that [this work] lacks authority. This, too—this reliance on editorial energy instead of on ripe special learning—may, alas, be also counted an "Americanizing": for certainly nothing has so cheapened the scholarship of our American encyclopaedias. As of 2007 in the 15th edition of Britannica, Dale Hoiberg, a sinologist, was listed as Britannica's Senior Vice President and editor-in-chief. Among his predecessors as editors-in-chief were Hugh Chisholm (1902–1924), James Louis Garvin (1926–1932), Franklin Henry Hooper (1932–1938), Walter Yust (1938–1960), Harry Ashmore (1960–1963), Warren E. Preece (1964–1968, 1969–1975), Sir William Haley (1968–1969), Philip W. Goetz (1979–1991), and Robert McHenry (1992–1997). As of 2007 Anita Wolff was listed as the Deputy Editor and Theodore Pappas as Executive Editor. Prior Executive Editors include John V. Dodge (1950–1964) and Philip W. Goetz. Paul T. Armstrong remains the longest working employee of Encyclopædia Britannica. He began his career there in 1934, eventually earning the positions of treasurer, vice president, and chief financial officer in his 58 years with the company, before retiring in 1992. The 2007 editorial staff of the Britannica included five Senior Editors and nine Associate Editors, supervised by Dale Hoiberg and four others. The editorial staff helped to write the articles of the Micropædia and some sections of the Macropædia. The Britannica has an editorial board of advisors, which includes 12 distinguished scholars: non-fiction author Nicholas Carr, religion scholar Wendy Doniger, political economist Benjamin M. Friedman, Council on Foreign Relations President Emeritus Leslie H. Gelb, computer scientist David Gelernter, Physics Nobel laureate Murray Gell-Mann, Carnegie Corporation of New York President Vartan Gregorian, philosopher Thomas Nagel, cognitive scientist Donald Norman, musicologist Don Michael Randel, Stewart Sutherland, Baron Sutherland of Houndwood, President of the Royal Society of Edinburgh, and cultural anthropologist Michael Wesch. The Propædia and its Outline of Knowledge were produced by dozens of editorial advisors under the direction of Mortimer J. Adler. Roughly half of these advisors have since died, including some of the Outline's chief architects – Rene Dubos (d. 1982), Loren Eiseley (d. 1977), Harold D. Lasswell (d. 1978), Mark Van Doren (d. 1972), Peter Ritchie Calder (d. 1982) and Mortimer J. Adler (d. 2001). The Propædia also lists just under 4,000 advisors who were consulted for the unsigned Micropædia articles. In January 1996, the Britannica was purchased from the Benton Foundation by billionaire Swiss financier Jacqui Safra, who serves as its current chair of the board. In 1997, Don Yannias, a long-time associate and investment advisor of Safra, became CEO of Encyclopædia Britannica, Incorporated. In 1999, a new company, Britannica.com Incorporated, was created to develop digital versions of the Britannica; Yannias assumed the role of CEO in the new company, while his former position at the parent company remained vacant for two years. Yannias' tenure at Britannica.com Incorporated was marked by missteps, considerable lay-offs, and financial losses. In 2001, Yannias was replaced by Ilan Yeshua, who reunited the leadership of the two companies. Yannias later returned to investment management, but remains on the Britannica's Board of Directors. In 2003, former management consultant Jorge Aguilar-Cauz was appointed President of Encyclopædia Britannica, Incorporated. 
Cauz is the senior executive and reports directly to the Britannica's Board of Directors. Cauz has been pursuing alliances with other companies and extending the Britannica brand to new educational and reference products, continuing the strategy pioneered by former CEO Elkan Harrison Powell in the mid-1930s. In the fall of 2017, Karthik Krishnan was appointed global chief executive officer of the Encyclopædia Britannica Group. Krishnan brought a varied perspective to the role based on several high-level positions in digital media, including RELX (formerly known as Reed Elsevier, and one of the constituents of the FTSE 100 Index) and Rodale, in which he was responsible for "driving business and cultural transformation and accelerating growth". Taking the reins of the company as it was preparing to mark its 250th anniversary and define the next phase of its digital strategy for consumers and K–12 schools, Krishnan launched a series of new initiatives in his first year. First was Britannica Insights, a free, downloadable software extension to the Google Chrome browser that served up edited, fact-checked Britannica information with queries on search engines such as Google, Yahoo, and Bing. Its purpose, the company said, was to "provide trusted, verified information" in conjunction with search results that were thought to be increasingly unreliable in the era of misinformation and "fake news." The product was quickly followed by Britannica School Insights, which provided similar content for subscribers to Britannica's online classroom solutions, and a partnership with YouTube in which verified Britannica content appeared on the site as an antidote to user-generated video content that could be false or misleading. Krishnan, an educator at New York University's Stern School of Business, believes in the "transformative power of education" and set about steering the company toward solidifying its place among leaders in educational technology and supplemental curriculum. Krishnan aimed at providing more useful and relevant solutions to customer needs, extending and renewing Britannica's historical emphasis on "utility", which had been the watchword of its first edition in 1768. As the Britannica is a general encyclopaedia, it does not seek to compete with specialized encyclopaedias such as the Encyclopaedia of Mathematics or the Dictionary of the Middle Ages, which can devote much more space to their chosen topics. In its first years, the Britannica's main competitor was the general encyclopaedia of Ephraim Chambers and, soon thereafter, Rees's Cyclopædia and Coleridge's Encyclopædia Metropolitana. In the 20th century, successful competitors included Collier's Encyclopedia, the Encyclopedia Americana, and the World Book Encyclopedia. Nevertheless, from the 9th edition onwards, the Britannica was widely considered to have the greatest authority of any general English-language encyclopaedia, especially because of its broad coverage and eminent authors. The print version of the Britannica was significantly more expensive than its competitors. Since the early 1990s, the Britannica has faced new challenges from digital information sources. The Internet, facilitated by the development of Web search engines, has grown into a common source of information for many people, and provides easy access to reliable original sources and expert opinions, thanks in part to initiatives such as Google Books, MIT's release of its educational materials and the open PubMed Central library of the National Library of Medicine.
The Internet tends to provide more current coverage than print media, due to the ease with which material on the Internet can be updated. In rapidly changing fields such as science, technology, politics, culture and modern history, the Britannica has struggled to stay up to date, a problem first analysed systematically by its former editor Walter Yust. Eventually, the Britannica turned to focus more on its online edition. The Encyclopædia Britannica has been compared with other print encyclopaedias, both qualitatively and quantitatively. A well-known comparison is that of Kenneth Kister, who gave a qualitative and quantitative comparison of the 1993 Britannica with two comparable encyclopaedias, Collier's Encyclopedia and the Encyclopedia Americana. For the quantitative analysis, ten articles were selected at random—circumcision, Charles Drew, Galileo, Philip Glass, heart disease, IQ, panda bear, sexual harassment, Shroud of Turin and Uzbekistan—and letter grades of A–D or F were awarded in four categories: coverage, accuracy, clarity, and recency. In all four categories and for all three encyclopaedias, the four average grades fell between B− and B+, chiefly because none of the encyclopaedias had an article on sexual harassment in 1994. In the accuracy category, the Britannica received one "D" and seven "A"s, Encyclopedia Americana received eight "A"s, and Collier's received one "D" and seven "A"s; thus, Britannica received an average score of 92% for accuracy to Americana's 95% and Collier's 92%. In the timeliness category, Britannica averaged an 86% to Americana's 90% and Collier's 85%. In 2012, the President of Encyclopædia Britannica announced that after 244 years, the encyclopaedia would cease print production and all future editions would be entirely digital. The most notable competitor of the Britannica among CD/DVD-ROM digital encyclopaedias was Encarta, now discontinued, a modern multimedia encyclopaedia that incorporated three print encyclopaedias: Funk & Wagnalls, Collier's and the New Merit Scholar's Encyclopedia. Encarta was the top-selling multimedia encyclopaedia, based on total US retail sales from January 2000 to February 2006. Both occupied the same price range, with the 2007 Encyclopædia Britannica Ultimate CD or DVD costing US$40–50 and the Microsoft Encarta Premium 2007 DVD costing US$45. The Britannica disc contains 100,000 articles and Merriam-Webster's Dictionary and Thesaurus (US only), and offers Primary and Secondary School editions. Encarta contained 66,000 articles, a user-friendly Visual Browser, interactive maps, math, language and homework tools, a US and UK dictionary, and a youth edition. Like Encarta, the digital Britannica has been criticized for being biased towards United States audiences; the United Kingdom-related articles are updated less often, maps of the United States are more detailed than those of other countries, and it lacks a UK dictionary. Like the Britannica, Encarta was available online by subscription, although some content could be accessed free. The main online alternative to Britannica is Wikipedia.
The key differences between the two lie in accessibility; the model of participation they bring to an encyclopedic project; their respective style sheets and editorial policies; relative ages; the number of subjects treated; the number of languages in which articles are written and made available; and their underlying economic models: unlike Britannica, Wikipedia is a not-for-profit and is not connected with traditional profit- and contract-based publishing distribution networks. The 699 printed Macropædia articles are generally written by identified contributors, and the roughly 65,000 printed Micropædia articles are the work of the editorial staff and identified outside consultants. Thus, a Britannica article either has known authorship or a set of possible authors (the editorial staff). With the exception of the editorial staff, most of the Britannica's contributors are experts in their field—some are Nobel laureates. By contrast, the articles of Wikipedia are written by people of unknown degrees of expertise: most do not claim any particular expertise, and of those who do, many are anonymous and have no verifiable credentials. It is for this lack of institutional vetting, or certification, that former Britannica editor-in-chief Robert McHenry notes his belief that Wikipedia cannot hope to rival the Britannica in accuracy. In 2005, the journal Nature chose articles from both websites in a wide range of science topics and sent them to what it called "relevant" field experts for peer review. The experts then compared the competing articles—one from each site on a given topic—side by side but were not told which article came from which site. Nature got back 42 usable reviews. The journal found just eight serious errors, such as general misunderstandings of vital concepts: four from each site. It also discovered many factual errors, omissions or misleading statements: 162 in Wikipedia and 123 in Britannica, an average of 3.86 mistakes per article for Wikipedia and 2.92 for Britannica. Although Britannica was revealed as the more accurate encyclopaedia, with fewer errors, Encyclopædia Britannica, Incorporated in its rebuttal called Nature's study flawed and misleading and called for a "prompt" retraction. It noted that two of the articles in the study were taken from a Britannica yearbook and not the encyclopaedia, and another two were from Compton's Encyclopedia (called the Britannica Student Encyclopedia on the company's website). Nature defended its story and declined to retract, stating that, as it was comparing Wikipedia with the web version of Britannica, it used whatever relevant material was available on Britannica's website. Interviewed in February 2009, the managing director of Britannica UK said: Wikipedia is a fun site to use and has a lot of interesting entries on there, but their approach wouldn't work for Encyclopædia Britannica. My job is to create more awareness of our very different approaches to publishing in the public mind. They're a chisel, we're a drill, and you need to have the correct tool for the job. In a January 2016 press release, Britannica called Wikipedia "an impressive achievement." Since the 3rd edition, the Britannica has enjoyed a popular and critical reputation for general excellence. The 3rd and the 9th editions were pirated for sale in the United States, beginning with Dobson's Encyclopaedia. On the release of the 14th edition, Time magazine dubbed the Britannica the "Patriarch of the Library". 
In a related advertisement, naturalist William Beebe was quoted as saying that the Britannica was "beyond comparison because there is no competitor." References to the Britannica can be found throughout English literature, most notably in one of Sir Arthur Conan Doyle's favourite Sherlock Holmes stories, "The Red-Headed League". The tale was highlighted by the Lord Mayor of London, Gilbert Inglefield, at the bicentennial of the Britannica. The Britannica has a reputation for summarising knowledge. To further their education, some people have devoted themselves to reading the entire Britannica, taking anywhere from three to 22 years to do so. When Fat'h Ali became the Shah of Persia in 1797, he was given a set of the Britannica's 3rd edition, which he read completely; after this feat, he extended his royal title to include "Most Formidable Lord and Master of the Encyclopædia Britannica". Writer George Bernard Shaw claimed to have read the complete 9th edition, except for the science articles, and Richard Evelyn Byrd took the Britannica as reading material for his five-month stay at the South Pole in 1934, while Philip Beaver read it during a sailing expedition. More recently, A.J. Jacobs, an editor at Esquire magazine, read the entire 2002 version of the 15th edition, describing his experiences in the well-received 2004 book, The Know-It-All: One Man's Humble Quest to Become the Smartest Person in the World. Only two people are known to have read two independent editions: the author C. S. Forester and Amos Urban Shirk, an American businessman who read the 11th and 14th editions, devoting roughly three hours per night for four and a half years to read the 11th. The CD/DVD-ROM version of the Britannica, Encyclopædia Britannica Ultimate Reference Suite, received the 2004 Distinguished Achievement Award from the Association of Educational Publishers. On 15 July 2009, Encyclopædia Britannica was awarded a spot as one of the "Top Ten Superbrands in the UK" by a panel of more than 2,000 independent reviewers, as reported by the BBC. Topics are chosen in part by reference to the Propædia "Outline of Knowledge". The bulk of the Britannica is devoted to geography (26% of the Macropædia), biography (14%), biology and medicine (11%), literature (7%), physics and astronomy (6%), religion (5%), art (4%), Western philosophy (4%), and law (3%). A complementary study of the Micropædia found that geography accounted for 25% of articles, science 18%, social sciences 17%, biography 17%, and all other humanities 25%. Writing in 1992, one reviewer judged that the "range, depth, and catholicity of coverage [of the Britannica] are unsurpassed by any other general Encyclopaedia." The Britannica does not cover topics in equivalent detail; for example, the whole of Buddhism and most other religions is covered in a single Macropædia article, whereas 14 articles are devoted to Christianity, comprising nearly half of all religion articles. The Britannica covers 50,479 biographies, 5,999 of them about women, with 11.87% being British citizens and 25.51% US citizens. However, the Britannica has been lauded as the least biased of general encyclopaedias marketed to Western readers and praised for its biographies of important women of all eras. It can be stated without fear of contradiction that the 15th edition of the Britannica accords non-Western cultural, social, and scientific developments more notice than any general English-language encyclopedia currently on the market.
On rare occasions, the Britannica has been criticized for its editorial choices. Given its roughly constant size, the encyclopaedia has needed to reduce or eliminate some topics to accommodate others, resulting in controversial decisions. The initial 15th edition (1974–1985) was faulted for having reduced or eliminated coverage of children's literature, military decorations, and the French poet Joachim du Bellay; editorial mistakes were also alleged, such as inconsistent sorting of Japanese biographies. Its elimination of the index was condemned, as was the apparently arbitrary division of articles into the Micropædia and Macropædia. Summing up, one critic called the initial 15th edition a "qualified failure...[that] cares more for juggling its format than for preserving." More recently, reviewers from the American Library Association were surprised to find that most educational articles had been eliminated from the 1992 Macropædia, along with the article on psychology. A very few Britannica-appointed contributors have been mistaken. A notorious instance from the Britannica's early years is the rejection of Newtonian gravity by George Gleig, the chief editor of the 3rd edition (1788–1797), who wrote that gravity was caused by the classical element of fire. The Britannica has also staunchly defended a scientific approach to cultural topics, as it did with William Robertson Smith's articles on religion in the 9th edition, particularly his article stating that the Bible was not historically accurate (1875). The Britannica has received criticism, especially as editions become outdated. It is expensive to produce a completely new edition of the Britannica, and its editors delay for as long as fiscally sensible (usually about 25 years). For example, despite continuous revision, the 14th edition became outdated after 35 years (1929–1964). When American physicist Harvey Einbinder detailed its failings in his 1964 book, The Myth of the Britannica, the encyclopaedia was provoked to produce the 15th edition, which required 10 years of work. It is still difficult to keep the Britannica current; one 1994 critic writes, "it is not difficult to find articles that are out-of-date or in need of revision", noting that the longer Macropædia articles are more likely to be outdated than the shorter Micropædia articles. Information in the Micropædia is sometimes inconsistent with the corresponding Macropædia article(s), mainly because of the failure to update one or the other. The bibliographies of the Macropædia articles have been criticized for being more out-of-date than the articles themselves. In 2005, 12-year-old schoolboy Lucian George found several inaccuracies in the Britannica's entries on Poland and wildlife in Eastern Europe. In 2010, an inaccurate entry about the Irish Civil War, which incorrectly described the war as having been between the north and south of Ireland, was discussed in the Irish press following a decision of the Department of Education and Science to pay for online access. Writing about the 3rd edition (1788–1797), Britannica's chief editor George Gleig observed that "perfection seems to be incompatible with the nature of works constructed on such a plan, and embracing such a variety of subjects." In March 2006, the Britannica wrote, "we in no way mean to imply that Britannica is error-free; we have never made such a claim" (although in 1962 Britannica's sales department famously said of the 14th edition "It is truth.
It is unquestionable fact.") The sentiment is expressed by its original editor, William Smellie: With regard to errors in general, whether falling under the denomination of mental, typographical or accidental, we are conscious of being able to point out a greater number than any critic whatever. Men who are acquainted with the innumerable difficulties attending the execution of a work of such an extensive nature will make proper allowances. To these we appeal, and shall rest satisfied with the judgment they pronounce. Past owners have included, in chronological order, the Edinburgh, Scotland-based printers Colin Macfarquhar and Andrew Bell, Scottish bookseller Archibald Constable, Scottish publisher A & C Black, Horace Everett Hooper, Sears Roebuck and William Benton. The present owner of Encyclopædia Britannica Inc. is Jacqui Safra, a Brazilian billionaire and actor. Recent advances in information technology and the rise of electronic encyclopaedias such as Encyclopædia Britannica Ultimate Reference Suite, Encarta and Wikipedia have reduced the demand for print encyclopaedias. To remain competitive, Encyclopædia Britannica, Inc. has stressed the reputation of the Britannica, reduced its price and production costs, and developed electronic versions on CD-ROM, DVD, and the World Wide Web. Since the early 1930s, the company has promoted spin-off reference works. The Britannica has been issued in 15 editions, with multi-volume supplements to the 3rd and 4th editions (see the Table below). The 5th and 6th editions were reprints of the 4th, and the 10th edition was only a supplement to the 9th, just as the 12th and 13th editions were supplements to the 11th. The 15th underwent massive reorganization in 1985, but the updated, current version is still known as the 15th. The 14th and 15th editions were edited every year throughout their runs, so that later printings of each were entirely different from early ones. Throughout history, the Britannica has had two aims: to be an excellent reference book, and to provide educational material. In 1974, the 15th edition adopted a third goal: to systematize all human knowledge. The history of the Britannica can be divided into five eras, punctuated by changes in management, or reorganization of the dictionary. In the first era (1st–6th editions, 1768–1826), the Britannica was managed and published by its founders, Colin Macfarquhar and Andrew Bell, by Archibald Constable, and by others. The Britannica was first published between December 1768 and 1771 in Edinburgh as the Encyclopædia Britannica, or, A Dictionary of Arts and Sciences, compiled upon a New Plan. In part, it was conceived in reaction to the French Encyclopédie of Denis Diderot and Jean le Rond d'Alembert (published 1751–1772), which had been inspired by Chambers's Cyclopaedia (first edition 1728). It went on sale 10 December. The Britannica of this period was primarily a Scottish enterprise, and it is one of the most enduring legacies of the Scottish Enlightenment. In this era, the Britannica moved from being a three-volume set (1st edition) compiled by one young editor—William Smellie—to a 20-volume set written by numerous authorities. Several other encyclopaedias competed throughout this period, among them editions of Abraham Rees's Cyclopædia and Coleridge's Encyclopædia Metropolitana and David Brewster's Edinburgh Encyclopædia. During the second era (7th–9th editions, 1827–1901), the Britannica was managed by the Edinburgh publishing firm A & C Black. 
Although some contributors were again recruited through friendships of the chief editors, notably Macvey Napier, others were attracted by the Britannica's reputation. The contributors often came from other countries and included the world's most respected authorities in their fields. A general index of all articles was included for the first time in the 7th edition, a practice maintained until 1974. Production of the 9th edition was overseen by Thomas Spencer Baynes, the first English-born editor-in-chief. Dubbed the "Scholar's Edition", the 9th edition is the most scholarly of all Britannicas. After 1880, Baynes was assisted by William Robertson Smith. No biographies of living persons were included. James Clerk Maxwell and Thomas Huxley were special advisors on science. However, by the close of the 19th century, the 9th edition was outdated, and the Britannica faced financial difficulties. In the third era (10th–14th editions, 1901–1973), the Britannica was managed by American businessmen who introduced direct marketing and door-to-door sales. The American owners gradually simplified articles, making them less scholarly for a mass market. The 10th edition was an eleven-volume supplement (including one each of maps and an index) to the 9th, numbered as volumes 25–35, but the 11th edition was a completely new work, and is still praised for excellence; its owner, Horace Hooper, lavished enormous effort on its perfection. When Hooper fell into financial difficulties, the Britannica was managed by Sears Roebuck for 18 years (1920–1923, 1928–1943). In 1932, the vice-president of Sears, Elkan Harrison Powell, assumed presidency of the Britannica; in 1936, he began the policy of continuous revision. This was a departure from earlier practice, in which the articles were not changed until a new edition was produced, at roughly 25-year intervals, some articles unchanged from earlier editions. Powell developed new educational products that built upon the Britannica's reputation. In 1943, Sears donated the Encyclopædia Britannica to the University of Chicago. William Benton, then a vice president of the university, provided the working capital for its operation. The stock was divided between Benton and the university, with the university holding an option on the stock. Benton became chairman of the board and managed the Britannica until his death in 1973. Benton set up the Benton Foundation, which managed the Britannica until 1996, and whose sole beneficiary was the University of Chicago. In 1968, the Britannica celebrated its bicentennial. In the fourth era (1974–1994), the Britannica introduced its 15th edition, which was reorganized into three parts: the Micropædia, the Macropædia, and the Propædia. Under Mortimer J. Adler (member of the Board of Editors of Encyclopædia Britannica since its inception in 1949, and its chair from 1974; director of editorial planning for the 15th edition of Britannica from 1965), the Britannica sought not only to be a good reference work and educational tool, but to systematize all human knowledge. The absence of a separate index and the grouping of articles into parallel encyclopaedias (the Micro- and Macropædia) provoked a "firestorm of criticism" of the initial 15th edition. In response, the 15th edition was completely reorganized and indexed for a re-release in 1985. This second version of the 15th edition continued to be published and revised through the release of the 2010 print version. 
The official title of the 15th edition is the New Encyclopædia Britannica, although it has also been promoted as Britannica 3. On 9 March 1976 the US Federal Trade Commission entered an opinion and order enjoining Encyclopædia Britannica, Inc. from using: a) deceptive advertising practices in recruiting sales agents and obtaining sales leads, and b) deceptive sales practices in the door-to-door presentations of its sales agents. In the fifth era (1994–present), digital versions have been developed and released on optical media and online. In 1996, the Britannica was bought by Jacqui Safra at well below its estimated value, owing to the company's financial difficulties. Encyclopædia Britannica, Incorporated split in 1999. One part retained the company name and developed the print version, and the other, Britannica.com Incorporated, developed digital versions. Since 2001, the two companies have shared a CEO, Ilan Yeshua, who has continued Powell's strategy of introducing new products with the Britannica name. In March 2012, Britannica's president, Jorge Cauz, announced that it would not produce any new print editions of the encyclopaedia, with the 2010 15th edition being the last. The company will focus only on the online edition and other educational tools. Britannica's final print edition was in 2010, a 32-volume set. Britannica Global Edition was also printed in 2010, containing 30 volumes and 18,251 pages, with 8,500 photographs, maps, flags, and illustrations in smaller "compact" volumes, as well as over 40,000 articles written by scholars from across the world, including Nobel Prize winners. Unlike the 15th edition, it did not contain Macro- and Micropædia sections, but ran A through Z as all editions up through the 14th had. The following is Britannica's description of the work: The editors of Encyclopædia Britannica, the world standard in reference since 1768, present the Britannica Global Edition. Developed specifically to provide comprehensive and global coverage of the world around us, this unique product contains thousands of timely, relevant, and essential articles drawn from the Encyclopædia Britannica itself, as well as from the Britannica Concise Encyclopedia, the Britannica Encyclopedia of World Religions, and Compton's by Britannica. Written by international experts and scholars, the articles in this collection reflect the standards that have been the hallmark of the leading English-language encyclopedia for over 240 years. In 2020, Encyclopædia Britannica, Inc. released the Britannica All New Children's Encyclopedia: What We Know and What We Don't, an encyclopaedia aimed primarily at younger readers, covering major topics. The encyclopedia was widely praised for bringing back the print format. It was Britannica's first encyclopaedia for children since 1984. The Britannica was dedicated to the reigning British monarch from 1788 to 1901 and then, upon its sale to an American partnership, to the British monarch and the President of the United States. Thus, the 11th edition is "dedicated by Permission to His Majesty George the Fifth, King of Great Britain and Ireland and of the British Dominions beyond the Seas, Emperor of India, and to William Howard Taft, President of the United States of America." 
The order of the dedications has changed with the relative power of the United States and Britain, and with relative sales; the 1954 version of the 14th edition is "Dedicated by Permission to the Heads of the Two English-Speaking Peoples, Dwight David Eisenhower, President of the United States of America, and Her Majesty, Queen Elizabeth the Second." Consistent with this tradition, the 2007 version of the current 15th edition was "dedicated by permission to the current President of the United States of America, George W. Bush, and Her Majesty, Queen Elizabeth II", while the 2010 version of the current 15th edition is "dedicated by permission to Barack Obama, President of the United States of America, and Her Majesty Queen Elizabeth II."
[ { "paragraph_id": 0, "text": "The Encyclopædia Britannica (Latin for \"British Encyclopædia\") is a general knowledge English-language encyclopaedia. It has been published by Encyclopædia Britannica, Inc. since 1768, although the company has changed ownership seven times. The encyclopaedia is maintained by about 100 full-time editors and more than 4,000 contributors. The 2010 version of the 15th edition, which spans 32 volumes and 32,640 pages, was the last printed edition. Since 2016, it has been published exclusively as an online encyclopaedia.", "title": "" }, { "paragraph_id": 1, "text": "Printed for 244 years, the Britannica was the longest-running in-print encyclopaedia in the English language. It was first published between 1768 and 1771 in the Scottish capital of Edinburgh, as three volumes. The encyclopaedia grew in size: the second edition was 10 volumes, and by its fourth edition (1801–1810) it had expanded to 20 volumes. Its rising stature as a scholarly work helped recruit eminent contributors, and the 9th (1875–1889) and 11th editions (1911) are landmark encyclopaedias for scholarship and literary style. Starting with the 11th edition and following its acquisition by an American firm, the Britannica shortened and simplified articles to broaden its appeal to the North American market.", "title": "" }, { "paragraph_id": 2, "text": "In 1933, the Britannica became the first encyclopaedia to adopt \"continuous revision\", in which the encyclopaedia is continually reprinted, with every article updated on a schedule. In the 21st century, the Britannica has suffered due to competition with the online crowdsourced encyclopaedia Wikipedia, although the Britannica was previously suffering from competition with the digital multimedia encyclopaedia Microsoft Encarta.", "title": "" }, { "paragraph_id": 3, "text": "In March 2012, it announced it would no longer publish printed editions and would focus instead on the online version. Britannica has been assessed to be politically closer to the centre of the US political spectrum than Wikipedia.", "title": "" }, { "paragraph_id": 4, "text": "The 15th edition has a three-part structure: a 12-volume Micropædia of short articles (generally fewer than 750 words), a 17-volume Macropædia of long articles (two to 310 pages), and a single Propædia volume to give a hierarchical outline of knowledge. The Micropædia was meant for quick fact-checking and as a guide to the Macropædia; readers are advised to study the Propædia outline to understand a subject's context and to find more detailed articles. Over 70 years, the size of the Britannica has remained steady, with about 40 million words on half a million topics. Though published in the United States since 1901, the Britannica has for the most part maintained British English spelling.", "title": "" }, { "paragraph_id": 5, "text": "Since 1985, the Britannica had four parts: the Micropædia, the Macropædia, the Propædia, and a two-volume index. The Britannica's articles are found in the Micro- and Macropædia, which encompass 12 and 17 volumes, respectively, each volume having roughly one thousand pages. The 2007 Macropædia has 699 in-depth articles, ranging in length from 2 to 310 pages and having references and named contributors. In contrast, the 2007 Micropædia has roughly 65,000 articles, the vast majority (about 97%) of which contain fewer than 750 words, no references, and no named contributors. 
The Micropædia articles are intended for quick fact-checking and to help in finding more thorough information in the Macropædia. The Macropædia articles are meant both as authoritative, well-written articles on their subjects and as storehouses of information not covered elsewhere. The longest article (310 pages) is on the United States, and resulted from the merger of the articles on the individual states. A 2013 \"Global Edition\" of Britannica contained approximately forty thousand articles.", "title": "Present status" }, { "paragraph_id": 6, "text": "Information can be found in the Britannica by following the cross-references in the Micropædia and Macropædia; however, these are sparse, averaging one cross-reference per page. Hence, readers are instead recommended to consult the alphabetical index or the Propædia, which organizes the Britannica's contents by topic.", "title": "Present status" }, { "paragraph_id": 7, "text": "The core of the Propædia is its \"Outline of Knowledge\", which aims to provide a logical framework for all human knowledge. Accordingly, the Outline is consulted by the Britannica's editors to decide which articles should be included in the Micro- and Macropædia. The Outline is also intended to be a study guide, to put subjects in their proper perspective, and to suggest a series of Britannica articles for the student wishing to learn a topic in depth. However, libraries have found that it is scarcely used, and reviewers have recommended that it be dropped from the encyclopaedia. The Propædia also has color transparencies of human anatomy and several appendices listing the staff members, advisors, and contributors to all three parts of the Britannica.", "title": "Present status" }, { "paragraph_id": 8, "text": "Taken together, the Micropædia and Macropædia comprise roughly 40 million words and 24,000 images. The two-volume index has 2,350 pages, listing the 228,274 topics covered in the Britannica, together with 474,675 subentries under those topics. The Britannica generally prefers British spelling over American; for example, it uses colour (not color), centre (not center), and encyclopaedia (not encyclopedia). However, there are exceptions to this rule, such as defense rather than defence. Common alternative spellings are provided with cross-references such as \"Color: see Colour.\"", "title": "Present status" }, { "paragraph_id": 9, "text": "Since 1936, the articles of the Britannica have been revised on a regular schedule, with at least 10% of them considered for revision each year. According to one Britannica website, 46% of its articles were revised over the past three years; however, according to another Britannica website, only 35% of the articles were revised.", "title": "Present status" }, { "paragraph_id": 10, "text": "The alphabetization of articles in the Micropædia and Macropædia follows strict rules. Diacritical marks and non-English letters are ignored, while numerical entries such as \"1812, War of\" are alphabetized as if the number had been written out (\"Eighteen-twelve, War of\"). Articles with identical names are ordered first by persons, then by places, then by things. Rulers with identical names are organized first alphabetically by country and then by chronology; thus, Charles III of France precedes Charles I of England, listed in Britannica as the ruler of Great Britain and Ireland. (That is, they are alphabetized as if their titles were \"Charles, France, 3\" and \"Charles, Great Britain and Ireland, 1\".) 
Similarly, places that share names are organized alphabetically by country, then by ever-smaller political divisions.", "title": "Present status" }, { "paragraph_id": 11, "text": "In March 2012, the company announced that the 2010 edition would be the last printed version. This was announced as a move by the company to adapt to the times and focus on its future using digital distribution. The peak year for the printed encyclopaedia was 1990 when 120,000 sets were sold, but it dropped to 40,000 in 1996. 12,000 sets of the 2010 edition were printed, of which 8,000 had been sold as of 2012. By late April 2012, the remaining copies of the 2010 edition had sold out at Britannica's online store. As of 2016, a replica of Britannica's 1768 first edition is sold on the online store.", "title": "Present status" }, { "paragraph_id": 12, "text": "Britannica Junior was first published in 1934 as 12 volumes. It was expanded to 15 volumes in 1947, and renamed Britannica Junior Encyclopædia in 1963. It was taken off the market after the 1984 printing.", "title": "Present status" }, { "paragraph_id": 13, "text": "A British Children's Britannica edited by John Armitage was issued in London in 1960. Its contents were determined largely by the eleven-plus standardized tests given in Britain. Britannica introduced the Children's Britannica to the US market in 1988, aimed at ages seven to 14.", "title": "Present status" }, { "paragraph_id": 14, "text": "In 1961, a 16-volume Young Children's Encyclopaedia was issued for children just learning to read.", "title": "Present status" }, { "paragraph_id": 15, "text": "My First Britannica is aimed at children ages six to 12, and the Britannica Discovery Library is for children aged three to six (issued 1974 to 1991).", "title": "Present status" }, { "paragraph_id": 16, "text": "There have been, and are, several abridged Britannica encyclopaedias. The single-volume Britannica Concise Encyclopædia has 28,000 short articles condensing the larger 32-volume Britannica; there are authorized translations in languages such as Chinese created by Encyclopedia of China Publishing House and Vietnamese.", "title": "Present status" }, { "paragraph_id": 17, "text": "Compton's by Britannica, first published in 2007, incorporating the former Compton's Encyclopedia, is aimed at 10- to 17-year-olds and consists of 26 volumes and 11,000 pages.", "title": "Present status" }, { "paragraph_id": 18, "text": "Since 1938, Encyclopædia Britannica, Inc. has published annually a Book of the Year covering the past year's events. A given edition of the Book of the Year is named in terms of the year of its publication, though the edition actually covers the events of the previous year. The company also publishes several specialized reference works, such as Shakespeare: The Essential Guide to the Life and Works of the Bard (Wiley, 2006).", "title": "Present status" }, { "paragraph_id": 19, "text": "The Britannica Ultimate Reference Suite 2012 DVD contains over 100,000 articles. This includes regular Britannica articles, as well as others drawn from the Britannica Student Encyclopædia, and the Britannica Elementary Encyclopædia. The package includes a range of supplementary content including maps, videos, sound clips, animations and web links. It also offers study tools and dictionary and thesaurus entries from Merriam-Webster.", "title": "Present status" }, { "paragraph_id": 20, "text": "Britannica Online is a website with more than 120,000 articles and is updated regularly. 
It has daily features, updates and links to news reports from The New York Times and the BBC. As of 2009, roughly 60% of Encyclopædia Britannica's revenue came from online operations, of which around 15% came from subscriptions to the consumer version of the websites. As of 2006, subscriptions were available on a yearly, monthly or weekly basis. Special subscription plans are offered to schools, colleges and libraries; such institutional subscribers constitute an important part of Britannica's business. Beginning in early 2007, the Britannica made articles freely available if they are hyperlinked from an external site. Non-subscribers are served pop-ups and advertising.", "title": "Present status" }, { "paragraph_id": 21, "text": "On 20 February 2007, Encyclopædia Britannica, Incorporated announced that it was working with mobile phone search company AskMeNow to launch a mobile encyclopaedia. Users will be able to send a question via text message, and AskMeNow will search Britannica's 28,000-article concise encyclopaedia to return an answer to the query. Daily topical features sent directly to users' mobile phones are also planned.", "title": "Present status" }, { "paragraph_id": 22, "text": "On 3 June 2008, an initiative to facilitate collaboration between online expert and amateur scholarly contributors for Britannica's online content (in the spirit of a wiki), with editorial oversight from Britannica staff, was announced. Approved contributions would be credited, though contributing automatically grants Encyclopædia Britannica, Incorporated perpetual, irrevocable license to those contributions.", "title": "Present status" }, { "paragraph_id": 23, "text": "On 22 January 2009, Britannica's president, Jorge Cauz, announced that the company would be accepting edits and additions to the online Britannica website from the public. The published edition of the encyclopaedia will not be affected by the changes. Individuals wishing to edit the Britannica website will have to register under their real name and address prior to editing or submitting their content. All edits submitted will be reviewed and checked and will have to be approved by the encyclopaedia's professional staff. Contributions from non-academic users will sit in a separate section from the expert-generated Britannica content, as will content submitted by non-Britannica scholars. Articles written by users, if vetted and approved, will also only be available in a special section of the website, separate from the professional articles. Official Britannica material would carry a \"Britannica Checked\" stamp, to distinguish it from the user-generated content.", "title": "Present status" }, { "paragraph_id": 24, "text": "On 14 September 2010, Encyclopædia Britannica, Inc. announced a partnership with mobile phone development company Concentric Sky to launch a series of iPhone products aimed at the K–12 market. On 20 July 2011, Encyclopædia Britannica, Incorporated announced that Concentric Sky had ported the Britannica Kids product line to Intel's Intel Atom-based Netbooks and on 26 October 2011 that it had launched its encyclopaedia as an iPad app. 
In 2010, Britannica released Britannica ImageQuest, a database of images.", "title": "Present status" }, { "paragraph_id": 25, "text": "In March 2012, it was announced that the company would cease printing the encyclopaedia set, and that it would focus more on its online version.", "title": "Present status" }, { "paragraph_id": 26, "text": "On 7 June 2018, Britannica released a Google Chrome extension, \"Britannica Insights\", which shows snippets of information from Britannica Online whenever the user performs a Google Search, in a box to the right of Google's results. Britannica Insights was also available as a Firefox extension but this was taken down due to a code review issue.", "title": "Present status" }, { "paragraph_id": 27, "text": "The print version of the Britannica has 4,411 contributors, many eminent in their fields, such as Nobel laureate economist Milton Friedman, astronomer Carl Sagan, and surgeon Michael DeBakey. Roughly a quarter of the contributors are deceased, some as long ago as 1947 (Alfred North Whitehead), while another quarter are retired or emeritus. Most (approximately 98%) contribute to only a single article; however, 64 contributed to three articles, 23 contributed to four articles, 10 contributed to five articles, and 8 contributed to more than five articles. An exceptionally prolific contributor is Christine Sutton of the University of Oxford, who contributed 24 articles on particle physics.", "title": "Personnel and management" }, { "paragraph_id": 28, "text": "While Britannica's authors have included writers such as Albert Einstein, Marie Curie, and Leon Trotsky, as well as notable independent encyclopaedists such as Isaac Asimov, some have been criticized for lack of expertise. In 1911 the historian George L. Burr wrote:", "title": "Personnel and management" }, { "paragraph_id": 29, "text": "With a temerity almost appalling, [the Britannica contributor, Mr. Philips] ranges over nearly the whole field of European history, political, social, ecclesiastical... The grievance is that [this work] lacks authority. This, too—this reliance on editorial energy instead of on ripe special learning—may, alas, be also counted an \"Americanizing\": for certainly nothing has so cheapened the scholarship of our American encyclopaedias.", "title": "Personnel and management" }, { "paragraph_id": 30, "text": "As of 2007 in the 15th edition of Britannica, Dale Hoiberg, a sinologist, was listed as Britannica's Senior Vice President and editor-in-chief. Among his predecessors as editors-in-chief were Hugh Chisholm (1902–1924), James Louis Garvin (1926–1932), Franklin Henry Hooper (1932–1938), Walter Yust (1938–1960), Harry Ashmore (1960–1963), Warren E. Preece (1964–1968, 1969–1975), Sir William Haley (1968–1969), Philip W. Goetz (1979–1991), and Robert McHenry (1992–1997). As of 2007 Anita Wolff was listed as the Deputy Editor and Theodore Pappas as Executive Editor. Prior Executive Editors include John V. Dodge (1950–1964) and Philip W. Goetz.", "title": "Personnel and management" }, { "paragraph_id": 31, "text": "Paul T. Armstrong remains the longest working employee of Encyclopædia Britannica. 
He began his career there in 1934, eventually earning the positions of treasurer, vice president, and chief financial officer in his 58 years with the company, before retiring in 1992.", "title": "Personnel and management" }, { "paragraph_id": 32, "text": "The 2007 editorial staff of the Britannica included five Senior Editors and nine Associate Editors, supervised by Dale Hoiberg and four others. The editorial staff helped to write the articles of the Micropædia and some sections of the Macropædia.", "title": "Personnel and management" }, { "paragraph_id": 33, "text": "The Britannica has an editorial board of advisors, which includes 12 distinguished scholars: non-fiction author Nicholas Carr, religion scholar Wendy Doniger, political economist Benjamin M. Friedman, Council on Foreign Relations President Emeritus Leslie H. Gelb, computer scientist David Gelernter, Physics Nobel laureate Murray Gell-Mann, Carnegie Corporation of New York President Vartan Gregorian, philosopher Thomas Nagel, cognitive scientist Donald Norman, musicologist Don Michael Randel, Stewart Sutherland, Baron Sutherland of Houndwood, President of the Royal Society of Edinburgh, and cultural anthropologist Michael Wesch.", "title": "Personnel and management" }, { "paragraph_id": 34, "text": "The Propædia and its Outline of Knowledge were produced by dozens of editorial advisors under the direction of Mortimer J. Adler. Roughly half of these advisors have since died, including some of the Outline's chief architects – Rene Dubos (d. 1982), Loren Eiseley (d. 1977), Harold D. Lasswell (d. 1978), Mark Van Doren (d. 1972), Peter Ritchie Calder (d. 1982) and Mortimer J. Adler (d. 2001). The Propædia also lists just under 4,000 advisors who were consulted for the unsigned Micropædia articles.", "title": "Personnel and management" }, { "paragraph_id": 35, "text": "In January 1996, the Britannica was purchased from the Benton Foundation by billionaire Swiss financier Jacqui Safra, who serves as its current chair of the board. In 1997, Don Yannias, a long-time associate and investment advisor of Safra, became CEO of Encyclopædia Britannica, Incorporated.", "title": "Personnel and management" }, { "paragraph_id": 36, "text": "In 1999, a new company, Britannica.com Incorporated, was created to develop digital versions of the Britannica; Yannias assumed the role of CEO in the new company, while his former position at the parent company remained vacant for two years. Yannias' tenure at Britannica.com Incorporated was marked by missteps, considerable lay-offs, and financial losses. In 2001, Yannias was replaced by Ilan Yeshua, who reunited the leadership of the two companies. Yannias later returned to investment management, but remains on the Britannica's Board of Directors.", "title": "Personnel and management" }, { "paragraph_id": 37, "text": "In 2003, former management consultant Jorge Aguilar-Cauz was appointed President of Encyclopædia Britannica, Incorporated. Cauz is the senior executive and reports directly to the Britannica's Board of Directors. Cauz has been pursuing alliances with other companies and extending the Britannica brand to new educational and reference products, continuing the strategy pioneered by former CEO Elkan Harrison Powell in the mid-1930s.", "title": "Personnel and management" }, { "paragraph_id": 38, "text": "In the fall of 2017, Karthik Krishnan was appointed global chief executive officer of the Encyclopædia Britannica Group. 
Krishnan brought a varied perspective to the role based on several high-level positions in digital media, including RELX (formerly known as Reed Elsevier, and one of the constituents of the FTSE 100 Index) and Rodale, in which he was responsible for \"driving business and cultural transformation and accelerating growth\".", "title": "Personnel and management" }, { "paragraph_id": 39, "text": "Taking the reins of the company as it was preparing to mark its 250th anniversary and define the next phase of its digital strategy for consumers and K–12 schools, Krishnan launched a series of new initiatives in his first year.", "title": "Personnel and management" }, { "paragraph_id": 40, "text": "First was Britannica Insights, a free, downloadable software extension to the Google Chrome browser that served up edited, fact-checked Britannica information with queries on search engines such as Google, Yahoo, and Bing. Its purpose, the company said, was to \"provide trusted, verified information\" in conjunction with search results that were thought to be increasingly unreliable in the era of misinformation and \"fake news.\"", "title": "Personnel and management" }, { "paragraph_id": 41, "text": "The product was quickly followed by Britannica School Insights, which provided similar content for subscribers to Britannica's online classroom solutions, and a partnership with YouTube in which verified Britannica content appeared on the site as an antidote to user-generated video content that could be false or misleading.", "title": "Personnel and management" }, { "paragraph_id": 42, "text": "Krishnan, an educator at New York University's Stern School of Business, believes in the \"transformative power of education\" and set about steering the company toward solidifying its place among leaders in educational technology and supplemental curriculum. Krishnan aimed at providing more useful and relevant solutions to customer needs, extending and renewing Britannica's historical emphasis on \"utility\", which had been the watchword of its first edition in 1768.", "title": "Personnel and management" }, { "paragraph_id": 43, "text": "As the Britannica is a general encyclopaedia, it does not seek to compete with specialized encyclopaedias such as the Encyclopaedia of Mathematics or the Dictionary of the Middle Ages, which can devote much more space to their chosen topics. In its first years, the Britannica's main competitor was the general encyclopaedia of Ephraim Chambers and, soon thereafter, Rees's Cyclopædia and Coleridge's Encyclopædia Metropolitana. In the 20th century, successful competitors included Collier's Encyclopedia, the Encyclopedia Americana, and the World Book Encyclopedia. Nevertheless, from the 9th edition onwards, the Britannica was widely considered to have the greatest authority of any general English-language encyclopaedia, especially because of its broad coverage and eminent authors. The print version of the Britannica was significantly more expensive than its competitors.", "title": "Competition" }, { "paragraph_id": 44, "text": "Since the early 1990s, the Britannica has faced new challenges from digital information sources. 
The Internet, facilitated by the development of Web search engines, has grown into a common source of information for many people, and provides easy access to reliable original sources and expert opinions, thanks in part to initiatives such as Google Books, MIT's release of its educational materials and the open PubMed Central library of the National Library of Medicine.", "title": "Competition" }, { "paragraph_id": 45, "text": "The Internet tends to provide more current coverage than print media, due to the ease with which material on the Internet can be updated. In rapidly changing fields such as science, technology, politics, culture and modern history, the Britannica has struggled to stay up to date, a problem first analysed systematically by its former editor Walter Yust. Eventually, the Britannica turned to focus more on its online edition.", "title": "Competition" }, { "paragraph_id": 46, "text": "The Encyclopædia Britannica has been compared with other print encyclopaedias, both qualitatively and quantitatively. A well-known comparison is that of Kenneth Kister, who gave a qualitative and quantitative comparison of the 1993 Britannica with two comparable encyclopaedias, Collier's Encyclopedia and the Encyclopedia Americana. For the quantitative analysis, ten articles were selected at random—circumcision, Charles Drew, Galileo, Philip Glass, heart disease, IQ, panda bear, sexual harassment, Shroud of Turin and Uzbekistan—and letter grades of A–D or F were awarded in four categories: coverage, accuracy, clarity, and recency. In all four categories and for all three encyclopaedias, the four average grades fell between B− and B+, chiefly because none of the encyclopaedias had an article on sexual harassment in 1994. In the accuracy category, the Britannica received one \"D\" and seven \"A\"s, Encyclopedia Americana received eight \"A\"s, and Collier's received one \"D\" and seven \"A\"s; thus, Britannica received an average score of 92% for accuracy to Americana's 95% and Collier's 92%. In the timeliness category, Britannica averaged an 86% to Americana's 90% and Collier's 85%.", "title": "Competition" }, { "paragraph_id": 47, "text": "In 2013, the President of Encyclopædia Britannica announced that after 244 years, the encyclopaedia would cease print production and all future editions would be entirely digital.", "title": "Competition" }, { "paragraph_id": 48, "text": "The most notable competitor of the Britannica among CD/DVD-ROM digital encyclopaedias was Encarta, now discontinued, a modern multimedia encyclopaedia that incorporated three print encyclopaedias: Funk & Wagnalls, Collier's and the New Merit Scholar's Encyclopedia. Encarta was the top-selling multimedia encyclopaedia, based on total US retail sales from January 2000 to February 2006. Both occupied the same price range, with the 2007 Encyclopædia Britannica Ultimate CD or DVD costing US$40–50 and the Microsoft Encarta Premium 2007 DVD costing US$45.", "title": "Competition" }, { "paragraph_id": 49, "text": "The Britannica disc contains 100,000 articles and Merriam-Webster's Dictionary and Thesaurus (US only), and offers Primary and Secondary School editions. Encarta contained 66,000 articles, a user-friendly Visual Browser, interactive maps, math, language and homework tools, a US and UK dictionary, and a youth edition. 
Like Encarta, the digital Britannica has been criticized for being biased towards United States audiences; the United Kingdom-related articles are updated less often, maps of the United States are more detailed than those of other countries, and it lacks a UK dictionary. Like the Britannica, Encarta was available online by subscription, although some content could be accessed free.", "title": "Competition" }, { "paragraph_id": 50, "text": "The main online alternative to Britannica is Wikipedia. The key differences between the two lie in accessibility; the model of participation they bring to an encyclopedic project; their respective style sheets and editorial policies; relative ages; the number of subjects treated; the number of languages in which articles are written and made available; and their underlying economic models: unlike Britannica, Wikipedia is a not-for-profit and is not connected with traditional profit- and contract-based publishing distribution networks.", "title": "Competition" }, { "paragraph_id": 51, "text": "The 699 printed Macropædia articles are generally written by identified contributors, and the roughly 65,000 printed Micropædia articles are the work of the editorial staff and identified outside consultants. Thus, a Britannica article either has known authorship or a set of possible authors (the editorial staff). With the exception of the editorial staff, most of the Britannica's contributors are experts in their field—some are Nobel laureates. By contrast, the articles of Wikipedia are written by people of unknown degrees of expertise: most do not claim any particular expertise, and of those who do, many are anonymous and have no verifiable credentials. It is for this lack of institutional vetting, or certification, that former Britannica editor-in-chief Robert McHenry notes his belief that Wikipedia cannot hope to rival the Britannica in accuracy.", "title": "Competition" }, { "paragraph_id": 52, "text": "In 2005, the journal Nature chose articles from both websites in a wide range of science topics and sent them to what it called \"relevant\" field experts for peer review. The experts then compared the competing articles—one from each site on a given topic—side by side but were not told which article came from which site. Nature got back 42 usable reviews.", "title": "Competition" }, { "paragraph_id": 53, "text": "The journal found just eight serious errors, such as general misunderstandings of vital concepts: four from each site. It also discovered many factual errors, omissions or misleading statements: 162 in Wikipedia and 123 in Britannica, an average of 3.86 mistakes per article for Wikipedia and 2.92 for Britannica.", "title": "Competition" }, { "paragraph_id": 54, "text": "Although Britannica was revealed as the more accurate encyclopaedia, with fewer errors, Encyclopædia Britannica, Incorporated in its rebuttal called Nature's study flawed and misleading and called for a \"prompt\" retraction. It noted that two of the articles in the study were taken from a Britannica yearbook and not the encyclopaedia, and another two were from Compton's Encyclopedia (called the Britannica Student Encyclopedia on the company's website).", "title": "Competition" }, { "paragraph_id": 55, "text": "Nature defended its story and declined to retract, stating that, as it was comparing Wikipedia with the web version of Britannica, it used whatever relevant material was available on Britannica's website. 
Interviewed in February 2009, the managing director of Britannica UK said:", "title": "Competition" }, { "paragraph_id": 56, "text": "Wikipedia is a fun site to use and has a lot of interesting entries on there, but their approach wouldn't work for Encyclopædia Britannica. My job is to create more awareness of our very different approaches to publishing in the public mind. They're a chisel, we're a drill, and you need to have the correct tool for the job.", "title": "Competition" }, { "paragraph_id": 57, "text": "In a January 2016 press release, Britannica called Wikipedia \"an impressive achievement.\"", "title": "Competition" }, { "paragraph_id": 58, "text": "Since the 3rd edition, the Britannica has enjoyed a popular and critical reputation for general excellence. The 3rd and the 9th editions were pirated for sale in the United States, beginning with Dobson's Encyclopaedia. On the release of the 14th edition, Time magazine dubbed the Britannica the \"Patriarch of the Library\". In a related advertisement, naturalist William Beebe was quoted as saying that the Britannica was \"beyond comparison because there is no competitor.\" References to the Britannica can be found throughout English literature, most notably in one of Sir Arthur Conan Doyle's favourite Sherlock Holmes stories, \"The Red-Headed League\". The tale was highlighted by the Lord Mayor of London, Gilbert Inglefield, at the bicentennial of the Britannica.", "title": "Critical and popular assessments" }, { "paragraph_id": 59, "text": "The Britannica has a reputation for summarising knowledge. To further their education, some people have devoted themselves to reading the entire Britannica, taking anywhere from three to 22 years to do so. When Fat'h Ali became the Shah of Persia in 1797, he was given a set of the Britannica's 3rd edition, which he read completely; after this feat, he extended his royal title to include \"Most Formidable Lord and Master of the Encyclopædia Britannica\".", "title": "Critical and popular assessments" }, { "paragraph_id": 60, "text": "Writer George Bernard Shaw claimed to have read the complete 9th edition, except for the science articles, and Richard Evelyn Byrd took the Britannica as reading material for his five-month stay at the South Pole in 1934, while Philip Beaver read it during a sailing expedition. More recently, A.J. Jacobs, an editor at Esquire magazine, read the entire 2002 version of the 15th edition, describing his experiences in the well-received 2004 book, The Know-It-All: One Man's Humble Quest to Become the Smartest Person in the World. Only two people are known to have read two independent editions: the author C. S. Forester and Amos Urban Shirk, an American businessman who read the 11th and 14th editions, devoting roughly three hours per night for four and a half years to read the 11th.", "title": "Critical and popular assessments" }, { "paragraph_id": 61, "text": "The CD/DVD-ROM version of the Britannica, Encyclopædia Britannica Ultimate Reference Suite, received the 2004 Distinguished Achievement Award from the Association of Educational Publishers. On 15 July 2009, Encyclopædia Britannica was awarded a spot as one of \"Top Ten Superbrands in the UK\" by a panel of more than 2,000 independent reviewers, as reported by the BBC.", "title": "Critical and popular assessments" }, { "paragraph_id": 62, "text": "Topics are chosen in part by reference to the Propædia \"Outline of Knowledge\". 
The bulk of the Britannica is devoted to geography (26% of the Macropædia), biography (14%), biology and medicine (11%), literature (7%), physics and astronomy (6%), religion (5%), art (4%), Western philosophy (4%), and law (3%). A complementary study of the Micropædia found that geography accounted for 25% of articles, science 18%, social sciences 17%, biography 17%, and all other humanities 25%. Writing in 1992, one reviewer judged that the \"range, depth, and catholicity of coverage [of the Britannica] are unsurpassed by any other general Encyclopaedia.\"", "title": "Critical and popular assessments" }, { "paragraph_id": 63, "text": "The Britannica does not cover topics in equivalent detail; for example, the whole of Buddhism and most other religions is covered in a single Macropædia article, whereas 14 articles are devoted to Christianity, comprising nearly half of all religion articles. The Britannica covers 50,479 biographies, 5,999 of them about women, with 11.87% being British citizens and 25.51% US citizens. However, the Britannica has been lauded as the least biased of general Encyclopaedias marketed to Western readers and praised for its biographies of important women of all eras.", "title": "Critical and popular assessments" }, { "paragraph_id": 64, "text": "It can be stated without fear of contradiction that the 15th edition of the Britannica accords non-Western cultural, social, and scientific developments more notice than any general English-language encyclopedia currently on the market.", "title": "Critical and popular assessments" }, { "paragraph_id": 65, "text": "On rare occasions, the Britannica has been criticized for its editorial choices. Given its roughly constant size, the encyclopaedia has needed to reduce or eliminate some topics to accommodate others, resulting in controversial decisions. The initial 15th edition (1974–1985) was faulted for having reduced or eliminated coverage of children's literature, military decorations, and the French poet Joachim du Bellay; editorial mistakes were also alleged, such as inconsistent sorting of Japanese biographies. Its elimination of the index was condemned, as was the apparently arbitrary division of articles into the Micropædia and Macropædia. Summing up, one critic called the initial 15th edition a \"qualified failure...[that] cares more for juggling its format than for preserving.\" More recently, reviewers from the American Library Association were surprised to find that most educational articles had been eliminated from the 1992 Macropædia, along with the article on psychology.", "title": "Critical and popular assessments" }, { "paragraph_id": 66, "text": "Some very few Britannica-appointed contributors are mistaken. A notorious instance from the Britannica's early years is the rejection of Newtonian gravity by George Gleig, the chief editor of the 3rd edition (1788–1797), who wrote that gravity was caused by the classical element of fire. The Britannica has also staunchly defended a scientific approach to cultural topics, as it did with William Robertson Smith's articles on religion in the 9th edition, particularly his article stating that the Bible was not historically accurate (1875).", "title": "Critical and popular assessments" }, { "paragraph_id": 67, "text": "The Britannica has received criticism, especially as editions become outdated. It is expensive to produce a completely new edition of the Britannica, and its editors delay for as long as fiscally sensible (usually about 25 years). 
For example, despite continuous revision, the 14th edition became outdated after 35 years (1929–1964). When American physicist Harvey Einbinder detailed its failings in his 1964 book, The Myth of the Britannica, the encyclopaedia was provoked to produce the 15th edition, which required 10 years of work. It is still difficult to keep the Britannica current; one 1994 critic writes, \"it is not difficult to find articles that are out-of-date or in need of revision\", noting that the longer Macropædia articles are more likely to be outdated than the shorter Micropædia articles. Information in the Micropædia is sometimes inconsistent with the corresponding Macropædia article(s), mainly because of the failure to update one or the other. The bibliographies of the Macropædia articles have been criticized for being more out-of-date than the articles themselves.", "title": "Critical and popular assessments" }, { "paragraph_id": 68, "text": "In 2005, 12-year-old schoolboy Lucian George found several inaccuracies in the Britannica's entries on Poland and wildlife in Eastern Europe.", "title": "Critical and popular assessments" }, { "paragraph_id": 69, "text": "In 2010, an inaccurate entry about the Irish Civil War, which incorrectly described the war as having been between the north and south of Ireland, was discussed in the Irish press following a decision of the Department of Education and Science to pay for online access.", "title": "Critical and popular assessments" }, { "paragraph_id": 70, "text": "Writing about the 3rd edition (1788–1797), Britannica's chief editor George Gleig observed that \"perfection seems to be incompatible with the nature of works constructed on such a plan, and embracing such a variety of subjects.\" In March 2006, the Britannica wrote, \"we in no way mean to imply that Britannica is error-free; we have never made such a claim\" (although in 1962 Britannica's sales department famously said of the 14th edition \"It is truth. It is unquestionable fact.\") The sentiment is expressed by its original editor, William Smellie:", "title": "Critical and popular assessments" }, { "paragraph_id": 71, "text": "With regard to errors in general, whether falling under the denomination of mental, typographical or accidental, we are conscious of being able to point out a greater number than any critic whatever. Men who are acquainted with the innumerable difficulties attending the execution of a work of such an extensive nature will make proper allowances. To these we appeal, and shall rest satisfied with the judgment they pronounce.", "title": "Critical and popular assessments" }, { "paragraph_id": 72, "text": "Past owners have included, in chronological order, the Edinburgh, Scotland-based printers Colin Macfarquhar and Andrew Bell, Scottish bookseller Archibald Constable, Scottish publisher A & C Black, Horace Everett Hooper, Sears Roebuck and William Benton.", "title": "History" }, { "paragraph_id": 73, "text": "The present owner of Encyclopædia Britannica Inc. is Jacqui Safra, a Brazilian billionaire and actor. Recent advances in information technology and the rise of electronic encyclopaedias such as Encyclopædia Britannica Ultimate Reference Suite, Encarta and Wikipedia have reduced the demand for print encyclopaedias. To remain competitive, Encyclopædia Britannica, Inc. has stressed the reputation of the Britannica, reduced its price and production costs, and developed electronic versions on CD-ROM, DVD, and the World Wide Web. 
Since the early 1930s, the company has promoted spin-off reference works.", "title": "History" }, { "paragraph_id": 74, "text": "The Britannica has been issued in 15 editions, with multi-volume supplements to the 3rd and 4th editions (see the Table below). The 5th and 6th editions were reprints of the 4th, and the 10th edition was only a supplement to the 9th, just as the 12th and 13th editions were supplements to the 11th. The 15th underwent massive reorganization in 1985, but the updated, current version is still known as the 15th. The 14th and 15th editions were edited every year throughout their runs, so that later printings of each were entirely different from early ones.", "title": "History" }, { "paragraph_id": 75, "text": "Throughout history, the Britannica has had two aims: to be an excellent reference book, and to provide educational material. In 1974, the 15th edition adopted a third goal: to systematize all human knowledge. The history of the Britannica can be divided into five eras, punctuated by changes in management, or reorganization of the dictionary.", "title": "History" }, { "paragraph_id": 76, "text": "In the first era (1st–6th editions, 1768–1826), the Britannica was managed and published by its founders, Colin Macfarquhar and Andrew Bell, by Archibald Constable, and by others. The Britannica was first published between December 1768 and 1771 in Edinburgh as the Encyclopædia Britannica, or, A Dictionary of Arts and Sciences, compiled upon a New Plan. In part, it was conceived in reaction to the French Encyclopédie of Denis Diderot and Jean le Rond d'Alembert (published 1751–1772), which had been inspired by Chambers's Cyclopaedia (first edition 1728). It went on sale 10 December.", "title": "History" }, { "paragraph_id": 77, "text": "The Britannica of this period was primarily a Scottish enterprise, and it is one of the most enduring legacies of the Scottish Enlightenment. In this era, the Britannica moved from being a three-volume set (1st edition) compiled by one young editor—William Smellie—to a 20-volume set written by numerous authorities. Several other encyclopaedias competed throughout this period, among them editions of Abraham Rees's Cyclopædia and Coleridge's Encyclopædia Metropolitana and David Brewster's Edinburgh Encyclopædia.", "title": "History" }, { "paragraph_id": 78, "text": "During the second era (7th–9th editions, 1827–1901), the Britannica was managed by the Edinburgh publishing firm A & C Black. Although some contributors were again recruited through friendships of the chief editors, notably Macvey Napier, others were attracted by the Britannica's reputation. The contributors often came from other countries and included the world's most respected authorities in their fields. A general index of all articles was included for the first time in the 7th edition, a practice maintained until 1974.", "title": "History" }, { "paragraph_id": 79, "text": "Production of the 9th edition was overseen by Thomas Spencer Baynes, the first English-born editor-in-chief. Dubbed the \"Scholar's Edition\", the 9th edition is the most scholarly of all Britannicas. After 1880, Baynes was assisted by William Robertson Smith. No biographies of living persons were included. James Clerk Maxwell and Thomas Huxley were special advisors on science. 
However, by the close of the 19th century, the 9th edition was outdated, and the Britannica faced financial difficulties.", "title": "History" }, { "paragraph_id": 80, "text": "In the third era (10th–14th editions, 1901–1973), the Britannica was managed by American businessmen who introduced direct marketing and door-to-door sales. The American owners gradually simplified articles, making them less scholarly for a mass market. The 10th edition was an eleven-volume supplement (including one each of maps and an index) to the 9th, numbered as volumes 25–35, but the 11th edition was a completely new work, and is still praised for excellence; its owner, Horace Hooper, lavished enormous effort on its perfection.", "title": "History" }, { "paragraph_id": 81, "text": "When Hooper fell into financial difficulties, the Britannica was managed by Sears Roebuck for 18 years (1920–1923, 1928–1943). In 1932, the vice-president of Sears, Elkan Harrison Powell, assumed presidency of the Britannica; in 1936, he began the policy of continuous revision. This was a departure from earlier practice, in which the articles were not changed until a new edition was produced, at roughly 25-year intervals, some articles unchanged from earlier editions. Powell developed new educational products that built upon the Britannica's reputation.", "title": "History" }, { "paragraph_id": 82, "text": "In 1943, Sears donated the Encyclopædia Britannica to the University of Chicago. William Benton, then a vice president of the university, provided the working capital for its operation. The stock was divided between Benton and the university, with the university holding an option on the stock. Benton became chairman of the board and managed the Britannica until his death in 1973. Benton set up the Benton Foundation, which managed the Britannica until 1996, and whose sole beneficiary was the University of Chicago. In 1968, the Britannica celebrated its bicentennial.", "title": "History" }, { "paragraph_id": 83, "text": "In the fourth era (1974–1994), the Britannica introduced its 15th edition, which was reorganized into three parts: the Micropædia, the Macropædia, and the Propædia. Under Mortimer J. Adler (member of the Board of Editors of Encyclopædia Britannica since its inception in 1949, and its chair from 1974; director of editorial planning for the 15th edition of Britannica from 1965), the Britannica sought not only to be a good reference work and educational tool, but to systematize all human knowledge. The absence of a separate index and the grouping of articles into parallel encyclopaedias (the Micro- and Macropædia) provoked a \"firestorm of criticism\" of the initial 15th edition. In response, the 15th edition was completely reorganized and indexed for a re-release in 1985. This second version of the 15th edition continued to be published and revised through the release of the 2010 print version. The official title of the 15th edition is the New Encyclopædia Britannica, although it has also been promoted as Britannica 3.", "title": "History" }, { "paragraph_id": 84, "text": "On 9 March 1976 the US Federal Trade Commission entered an opinion and order enjoining Encyclopædia Britannica, Inc. 
from using: a) deceptive advertising practices in recruiting sales agents and obtaining sales leads, and b) deceptive sales practices in the door-to-door presentations of its sales agents.", "title": "History" }, { "paragraph_id": 85, "text": "In the fifth era (1994–present), digital versions have been developed and released on optical media and online.", "title": "History" }, { "paragraph_id": 86, "text": "In 1996, the Britannica was bought by Jacqui Safra at well below its estimated value, owing to the company's financial difficulties. Encyclopædia Britannica, Incorporated split in 1999. One part retained the company name and developed the print version, and the other, Britannica.com Incorporated, developed digital versions. Since 2001, the two companies have shared a CEO, Ilan Yeshua, who has continued Powell's strategy of introducing new products with the Britannica name. In March 2012, Britannica's president, Jorge Cauz, announced that it would not produce any new print editions of the encyclopaedia, with the 2010 15th edition being the last. The company will focus only on the online edition and other educational tools.", "title": "History" }, { "paragraph_id": 87, "text": "Britannica's final print edition was in 2010, a 32-volume set. Britannica Global Edition was also printed in 2010, containing 30 volumes and 18,251 pages, with 8,500 photographs, maps, flags, and illustrations in smaller \"compact\" volumes, as well as over 40,000 articles written by scholars from across the world, including Nobel Prize winners. Unlike the 15th edition, it did not contain Macro- and Micropædia sections, but ran A through Z as all editions up through the 14th had. The following is Britannica's description of the work:", "title": "History" }, { "paragraph_id": 88, "text": "The editors of Encyclopædia Britannica, the world standard in reference since 1768, present the Britannica Global Edition. Developed specifically to provide comprehensive and global coverage of the world around us, this unique product contains thousands of timely, relevant, and essential articles drawn from the Encyclopædia Britannica itself, as well as from the Britannica Concise Encyclopedia, the Britannica Encyclopedia of World Religions, and Compton's by Britannica. Written by international experts and scholars, the articles in this collection reflect the standards that have been the hallmark of the leading English-language encyclopedia for over 240 years.", "title": "History" }, { "paragraph_id": 89, "text": "In 2020, Encyclopædia Britannica, Inc. released the Britannica All New Children's Encyclopedia: What We Know and What We Don't, an encyclopaedia aimed primarily at younger readers, covering major topics. The encyclopedia was widely praised for bringing back the print format. It was Britannica's first encyclopaedia for children since 1984.", "title": "History" }, { "paragraph_id": 90, "text": "The Britannica was dedicated to the reigning British monarch from 1788 to 1901 and then, upon its sale to an American partnership, to the British monarch and the President of the United States. 
Thus, the 11th edition is \"dedicated by Permission to His Majesty George the Fifth, King of Great Britain and Ireland and of the British Dominions beyond the Seas, Emperor of India, and to William Howard Taft, President of the United States of America.\" The order of the dedications has changed with the relative power of the United States and Britain, and with relative sales; the 1954 version of the 14th edition is \"Dedicated by Permission to the Heads of the Two English-Speaking Peoples, Dwight David Eisenhower, President of the United States of America, and Her Majesty, Queen Elizabeth the Second.\"", "title": "History" }, { "paragraph_id": 91, "text": "Consistent with this tradition, the 2007 version of the current 15th edition was \"dedicated by permission to the current President of the United States of America, George W. Bush, and Her Majesty, Queen Elizabeth II\", while the 2010 version of the current 15th edition is \"dedicated by permission to Barack Obama, President of the United States of America, and Her Majesty Queen Elizabeth II.\"", "title": "History" } ]
The Encyclopædia Britannica is a general knowledge English-language encyclopaedia. It has been published by Encyclopædia Britannica, Inc. since 1768, although the company has changed ownership seven times. The encyclopaedia is maintained by about 100 full-time editors and more than 4,000 contributors. The 2010 version of the 15th edition, which spans 32 volumes and 32,640 pages, was the last printed edition. Since 2016, it has been published exclusively as an online encyclopaedia. Printed for 244 years, the Britannica was the longest-running in-print encyclopaedia in the English language. It was first published between 1768 and 1771 in the Scottish capital of Edinburgh, as three volumes. The encyclopaedia grew in size: the second edition was 10 volumes, and by its fourth edition (1801–1810) it had expanded to 20 volumes. Its rising stature as a scholarly work helped recruit eminent contributors, and the 9th (1875–1889) and 11th editions (1911) are landmark encyclopaedias for scholarship and literary style. Starting with the 11th edition and following its acquisition by an American firm, the Britannica shortened and simplified articles to broaden its appeal to the North American market. In 1933, the Britannica became the first encyclopaedia to adopt "continuous revision", in which the encyclopaedia is continually reprinted, with every article updated on a schedule. In the 21st century, the Britannica has suffered due to competition with the online crowdsourced encyclopaedia Wikipedia, although the Britannica was previously suffering from competition with the digital multimedia encyclopaedia Microsoft Encarta. In March 2012, it announced it would no longer publish printed editions and would focus instead on the online version. Britannica has been assessed to be politically closer to the centre of the US political spectrum than Wikipedia. The 15th edition has a three-part structure: a 12-volume Micropædia of short articles, a 17-volume Macropædia of long articles, and a single Propædia volume to give a hierarchical outline of knowledge. The Micropædia was meant for quick fact-checking and as a guide to the Macropædia; readers are advised to study the Propædia outline to understand a subject's context and to find more detailed articles. Over 70 years, the size of the Britannica has remained steady, with about 40 million words on half a million topics. Though published in the United States since 1901, the Britannica has for the most part maintained British English spelling.
2001-07-07T09:49:23Z
2023-12-31T03:50:13Z
[ "Template:Official website", "Template:Short description", "Template:Use dmy dates", "Template:Portal", "Template:Cite web", "Template:Cite press release", "Template:Refbegin", "Template:Sister project links", "Template:Pp-pc", "Template:'s", "Template:As of", "Template:Reflist", "Template:Internet Archive author", "Template:Authority control", "Template:Infobox book", "Template:Lang", "Template:Citation needed", "Template:Original research inline", "Template:Efn", "Template:Notelist", "Template:Cite book", "Template:Cbignore", "Template:Use Oxford spelling", "Template:Cite news", "Template:Cite encyclopedia", "Template:Dead link", "Template:Refend", "Template:Pp-move", "Template:Main", "Template:Librivox author", "Template:Section link", "Template:Cite journal", "Template:Redirect", "Template:Blockquote", "Template:Cite magazine", "Template:Webarchive", "Template:Cite EB1911", "Template:Cite SBDEL", "Template:Cite EB9" ]
https://en.wikipedia.org/wiki/Encyclop%C3%A6dia_Britannica
9,509
Endometrium
The endometrium is the inner epithelial layer, along with its mucous membrane, of the mammalian uterus. It has a basal layer and a functional layer: the basal layer contains stem cells which regenerate the functional layer. The functional layer thickens and then is shed during menstruation in humans and some other mammals, including apes, Old World monkeys, some species of bat, the elephant shrew and the Cairo spiny mouse. In most other mammals, the endometrium is reabsorbed in the estrous cycle. During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. The speculated presence of an endometrial microbiota has been argued against. The endometrium consists of a single layer of columnar epithelium plus the stroma on which it rests. The stroma is a layer of connective tissue that varies in thickness according to hormonal influences. In the uterus, simple tubular glands reach from the endometrial surface through to the base of the stroma, which also carries a rich blood supply provided by the spiral arteries. In women of reproductive age, two layers of endometrium can be distinguished. These two layers occur only in the endometrium lining the cavity of the uterus, and not in the lining of the fallopian tubes. In the absence of progesterone, the arteries supplying blood to the functional layer constrict, so that cells in that layer become ischaemic and die, leading to menstruation. It is possible to identify the phase of the menstrual cycle by reference to either the ovarian cycle or the uterine cycle by observing microscopic differences at each phase—for example in the ovarian cycle. About 20,000 protein coding genes are expressed in human cells and some 70% of these genes are expressed in the normal endometrium. Just over 100 of these genes are more specifically expressed in the endometrium, with only a handful of genes being highly endometrium-specific. The corresponding specific proteins are expressed in the glandular and stromal cells of the endometrial mucosa. The expression of many of these proteins varies depending on the menstrual cycle, for example the progesterone receptor and thyrotropin-releasing hormone, both expressed in the proliferative phase, and PAEP, expressed in the secretory phase. Other proteins, such as the HOX11 protein that is required for female fertility, are expressed in endometrial stroma cells throughout the menstrual cycle. Certain specific proteins such as the estrogen receptor are also expressed in other female tissue types, such as the cervix, fallopian tubes, ovaries and breast. The uterus and endometrium were for a long time thought to be sterile. The cervical plug of mucosa was seen to prevent the entry of any microorganisms ascending from the vagina. In the 1980s this view was challenged when it was shown that uterine infections could arise from weaknesses in the barrier of the cervical plug. Organisms from the vaginal microbiota could enter the uterus during uterine contractions in the menstrual cycle. Further studies sought to identify microbiota specific to the uterus which would be of help in identifying cases of unsuccessful IVF and miscarriages. Their findings were seen to be unreliable due to the possibility of cross-contamination in the sampling procedures used. 
The well-documented presence of Lactobacillus species, for example, was easily explained by an increase in the vaginal population being able to seep into the cervical mucous. Another study highlighted the flaws of the earlier studies including cross-contamination. It was also argued that the evidence from studies using germ-free offspring of axenic animals (germ-free) clearly showed the sterility of the uterus. The authors concluded that in light of these findings there was no existence of a microbiome. The normal dominance of Lactobacilli in the vagina is seen as a marker for vaginal health. However, in the uterus this much lower population is seen as invasive in a closed environment that is highly regulated by female sex hormones, and that could have unwanted consequences. In studies of endometriosis Lactobacillus is not the dominant type and there are higher levels of Streptococcus and Staphylococcus species. Half of the cases of bacterial vaginitis showed a polymicrobial biofilm attached to the endometrium. The endometrium is the innermost lining layer of the uterus, and functions to prevent adhesions between the opposed walls of the myometrium, thereby maintaining the patency of the uterine cavity. During the menstrual cycle or estrous cycle, the endometrium grows to a thick, blood vessel-rich, glandular tissue layer. This represents an optimal environment for the implantation of a blastocyst upon its arrival in the uterus. The endometrium is central, echogenic (detectable using ultrasound scanners), and has an average thickness of 6.7 mm. During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. The functional layer of the endometrial lining undergoes cyclic regeneration from stem cells in the basal layer. Humans, apes, and some other species display the menstrual cycle, whereas most other mammals are subject to an estrous cycle. In both cases, the endometrium initially proliferates under the influence of estrogen. However, once ovulation occurs, the ovary (specifically the corpus luteum) will produce much larger amounts of progesterone. This changes the proliferative pattern of the endometrium to a secretory lining. Eventually, the secretory lining provides a hospitable environment for one or more blastocysts. Upon fertilization, the egg may implant into the uterine wall and provide feedback to the body with human chorionic gonadotropin (hCG). hCG provides continued feedback throughout pregnancy by maintaining the corpus luteum, which will continue its role of releasing progesterone and estrogen. In case of implantation, the endometrial lining remains as decidua. The decidua becomes part of the placenta; it provides support and protection for the gestation. Without implantation of a fertilized egg, the endometrial lining is either reabsorbed (estrous cycle) or shed (menstrual cycle). In the latter case, the process of shedding involves the breaking down of the lining, the tearing of small connective blood vessels, and the loss of the tissue and blood that had constituted it through the vagina. The entire process occurs over a period of several days. Menstruation may be accompanied by a series of uterine contractions; these help expel the menstrual endometrium. If there is inadequate stimulation of the lining, due to lack of hormones, the endometrium remains thin and inactive. 
In humans, this will result in amenorrhea, or the absence of a menstrual period. After menopause, the lining is often described as being atrophic. In contrast, endometrium that is chronically exposed to estrogens, but not to progesterone, may become hyperplastic. Long-term use of oral contraceptives with highly potent progestins can also induce endometrial atrophy. In humans, the cycle of building and shedding the endometrial lining lasts an average of 28 days. The endometrium develops at different rates in different mammals. Various factors including the seasons, climate, and stress can affect its development. The endometrium itself produces certain hormones at different stages of the cycle and this affects other parts of the reproductive system. Chorionic tissue can result in marked endometrial changes, known as an Arias-Stella reaction, that have an appearance similar to cancer. Historically, this change was diagnosed as endometrial cancer and it is important only in so far as it should not be misdiagnosed as cancer. Thin endometrium may be defined as an endometrial thickness of less than 8 mm. It usually occurs after menopause. Treatments that can improve endometrial thickness include Vitamin E, L-arginine and sildenafil citrate. Gene expression profiling using cDNA microarray can be used for the diagnosis of endometrial disorders. The European Menopause and Andropause Society (EMAS) released Guidelines with detailed information to assess the endometrium. An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate in in vitro fertilization by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified. The optimal endometrial thickness is 10 mm. Nevertheless, perfect synchrony is not necessary in humans; if the endometrium is not ready to receive the embryo, an ectopic pregnancy may occur. This consists of the implantation of the blastocyst outside the uterus, which can be extremely dangerous. Observation of the endometrium by transvaginal ultrasonography is used when administering fertility medication, such as in in vitro fertilization. At the time of embryo transfer, it is favorable to have an endometrium of a thickness of between 7 and 14 mm with a triple-line configuration, which means that the endometrium contains a hyperechoic (usually displayed as light) line in the middle surrounded by two more hypoechoic (darker) lines. A triple-line endometrium reflects the separation of the basal layer and the functional layer, and is also observed in the periovulatory period secondary to rising estradiol levels, and disappears after ovulation. Endometrial thickness is also associated with live births in IVF. The live birth rate in a normal endometrium is halved when the thickness is <5 mm. Estrogens stimulate endometrial proliferation and carcinogenesis. Conversely, progestogens inhibit endometrial proliferation and carcinogenesis caused by estrogens and stimulate differentiation of the endometrium into decidua, which is termed endometrial transformation or decidualization. This is mediated by the progestogenic and functional antiestrogenic effects of progestogens in this tissue. These effects of progestogens and their protection against endometrial hyperplasia and endometrial cancer caused by estrogens are referred to as endometrial protection.
[ { "paragraph_id": 0, "text": "The endometrium is the inner epithelial layer, along with its mucous membrane, of the mammalian uterus. It has a basal layer and a functional layer: the basal layer contains stem cells which regenerate the functional layer. The functional layer thickens and then is shed during menstruation in humans and some other mammals, including apes, Old World monkeys, some species of bat, the elephant shrew and the Cairo spiny mouse. In most other mammals, the endometrium is reabsorbed in the estrous cycle. During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. The speculated presence of an endometrial microbiota has been argued against.", "title": "" }, { "paragraph_id": 1, "text": "The endometrium consists of a single layer of columnar epithelium plus the stroma on which it rests. The stroma is a layer of connective tissue that varies in thickness according to hormonal influences. In the uterus, simple tubular glands reach from the endometrial surface through to the base of the stroma, which also carries a rich blood supply provided by the spiral arteries. In women of reproductive age, two layers of endometrium can be distinguished. These two layers occur only in the endometrium lining the cavity of the uterus, and not in the lining of the fallopian tubes.", "title": "Structure" }, { "paragraph_id": 2, "text": "In the absence of progesterone, the arteries supplying blood to the functional layer constrict, so that cells in that layer become ischaemic and die, leading to menstruation.", "title": "Structure" }, { "paragraph_id": 3, "text": "It is possible to identify the phase of the menstrual cycle by reference to either the ovarian cycle or the uterine cycle by observing microscopic differences at each phase—for example in the ovarian cycle:", "title": "Structure" }, { "paragraph_id": 4, "text": "About 20,000 protein coding genes are expressed in human cells and some 70% of these genes are expressed in the normal endometrium. Just over 100 of these genes are more specifically expressed in the endometrium with only a handful genes being highly endometrium specific. The corresponding specific proteins are expressed in the glandular and stromal cells of the endometrial mucosa. The expression of many of these proteins vary depending on the menstrual cycle, for example the progesterone receptor and thyrotropin-releasing hormone both expressed in the proliferative phase, and PAEP expressed in the secretory phase. Other proteins such as the HOX11 protein that is required for female fertility, is expressed in endometrial stroma cells throughout the menstrual cycle. Certain specific proteins such as the estrogen receptor are also expressed in other types of female tissue types, such as the cervix, fallopian tubes, ovaries and breast.", "title": "Structure" }, { "paragraph_id": 5, "text": "The uterus and endometrium was for a long time thought to be sterile. The cervical plug of mucosa was seen to prevent the entry of any microorganisms ascending from the vagina. In the 1980s this view was challenged when it was shown that uterine infections could arise from weaknesses in the barrier of the cervical plug. Organisms from the vaginal microbiota could enter the uterus during uterine contractions in the menstrual cycle. 
Further studies sought to identify microbiota specific to the uterus which would be of help in identifying cases of unsuccessful IVF and miscarriages. Their findings were seen to be unreliable due to the possibility of cross-contamination in the sampling procedures used. The well-documented presence of Lactobacillus species, for example, was easily explained by an increase in the vaginal population being able to seep into the cervical mucous. Another study highlighted the flaws of the earlier studies including cross-contamination. It was also argued that the evidence from studies using germ-free offspring of axenic animals (germ-free) clearly showed the sterility of the uterus. The authors concluded that in light of these findings there was no existence of a microbiome.", "title": "Structure" }, { "paragraph_id": 6, "text": "The normal dominance of Lactobacilli in the vagina is seen as a marker for vaginal health. However, in the uterus this much lower population is seen as invasive in a closed environment that is highly regulated by female sex hormones, and that could have unwanted consequences. In studies of endometriosis Lactobacillus is not the dominant type and there are higher levels of Streptococcus and Staphylococcus species. Half of the cases of bacterial vaginitis showed a polymicrobial biofilm attached to the endometrium.", "title": "Structure" }, { "paragraph_id": 7, "text": "The endometrium is the innermost lining layer of the uterus, and functions to prevent adhesions between the opposed walls of the myometrium, thereby maintaining the patency of the uterine cavity. During the menstrual cycle or estrous cycle, the endometrium grows to a thick, blood vessel-rich, glandular tissue layer. This represents an optimal environment for the implantation of a blastocyst upon its arrival in the uterus. The endometrium is central, echogenic (detectable using ultrasound scanners), and has an average thickness of 6.7 mm.", "title": "Function" }, { "paragraph_id": 8, "text": "During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus.", "title": "Function" }, { "paragraph_id": 9, "text": "The functional layer of the endometrial lining undergoes cyclic regeneration from stem cells in the basal layer. Humans, apes, and some other species display the menstrual cycle, whereas most other mammals are subject to an estrous cycle. In both cases, the endometrium initially proliferates under the influence of estrogen. However, once ovulation occurs, the ovary (specifically the corpus luteum) will produce much larger amounts of progesterone. This changes the proliferative pattern of the endometrium to a secretory lining. Eventually, the secretory lining provides a hospitable environment for one or more blastocysts.", "title": "Function" }, { "paragraph_id": 10, "text": "Upon fertilization, the egg may implant into the uterine wall and provide feedback to the body with human chorionic gonadotropin (hCG). hCG provides continued feedback throughout pregnancy by maintaining the corpus luteum, which will continue its role of releasing progesterone and estrogen. In case of implantation, the endometrial lining remains as decidua. 
The decidua becomes part of the placenta; it provides support and protection for the gestation.", "title": "Function" }, { "paragraph_id": 11, "text": "Without implantation of a fertilized egg, the endometrial lining is either reabsorbed (estrous cycle) or shed (menstrual cycle). In the latter case, the process of shedding involves the breaking down of the lining, the tearing of small connective blood vessels, and the loss of the tissue and blood that had constituted it through the vagina. The entire process occurs over a period of several days. Menstruation may be accompanied by a series of uterine contractions; these help expel the menstrual endometrium.", "title": "Function" }, { "paragraph_id": 12, "text": "If there is inadequate stimulation of the lining, due to lack of hormones, the endometrium remains thin and inactive. In humans, this will result in amenorrhea, or the absence of a menstrual period. After menopause, the lining is often described as being atrophic. In contrast, endometrium that is chronically exposed to estrogens, but not to progesterone, may become hyperplastic. Long-term use of oral contraceptives with highly potent progestins can also induce endometrial atrophy.", "title": "Function" }, { "paragraph_id": 13, "text": "In humans, the cycle of building and shedding the endometrial lining lasts an average of 28 days. The endometrium develops at different rates in different mammals. Various factors including the seasons, climate, and stress can affect its development. The endometrium itself produces certain hormones at different stages of the cycle and this affects other parts of the reproductive system.", "title": "Function" }, { "paragraph_id": 14, "text": "Chorionic tissue can result in marked endometrial changes, known as an Arias-Stella reaction, that have an appearance similar to cancer. Historically, this change was diagnosed as endometrial cancer and it is important only in so far as it should not be misdiagnosed as cancer.", "title": "Diseases related with endometrium" }, { "paragraph_id": 15, "text": "Thin endometrium may be defined as an endometrial thickness of less than 8 mm. It usually occurs after menopause. Treatments that can improve endometrial thickness include Vitamin E, L-arginine and sildenafil citrate.", "title": "Diseases related with endometrium" }, { "paragraph_id": 16, "text": "Gene expression profiling using cDNA microarray can be used for the diagnosis of endometrial disorders. The European Menopause and Andropause Society (EMAS) released Guidelines with detailed information to assess the endometrium.", "title": "Diseases related with endometrium" }, { "paragraph_id": 17, "text": "", "title": "Diseases related with endometrium" }, { "paragraph_id": 18, "text": "An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate in in vitro fertilization by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified. The optimal endometrial thickness is 10 mm. Nevertheless, perfect synchrony is not necessary in humans; if the endometrium is not ready to receive the embryo, an ectopic pregnancy may occur. 
This consists of the implantation of the blastocyst outside the uterus, which can be extremely dangerous.", "title": "Diseases related with endometrium" }, { "paragraph_id": 19, "text": "Observation of the endometrium by transvaginal ultrasonography is used when administering fertility medication, such as in in vitro fertilization. At the time of embryo transfer, it is favorable to have an endometrium of a thickness of between 7 and 14 mm with a triple-line configuration, which means that the endometrium contains a hyperechoic (usually displayed as light) line in the middle surrounded by two more hypoechoic (darker) lines. A triple-line endometrium reflects the separation of the basal layer and the functional layer, and is also observed in the periovulatory period secondary to rising estradiol levels, and disappears after ovulation.", "title": "Diseases related with endometrium" }, { "paragraph_id": 20, "text": "Endometrial thickness is also associated with live births in IVF. The live birth rate in a normal endometrium is halved when the thickness is <5 mm.", "title": "Diseases related with endometrium" }, { "paragraph_id": 21, "text": "Estrogens stimulate endometrial proliferation and carcinogenesis. Conversely, progestogens inhibit endometrial proliferation and carcinogenesis caused by estrogens and stimulate differentiation of the endometrium into decidua, which is termed endometrial transformation or decidualization. This is mediated by the progestogenic and functional antiestrogenic effects of progestogens in this tissue. These effects of progestogens and their protection against endometrial hyperplasia and endometrial cancer caused by estrogens are referred to as endometrial protection.", "title": "Endometrial protection" } ]
The endometrium is the inner epithelial layer, along with its mucous membrane, of the mammalian uterus. It has a basal layer and a functional layer: the basal layer contains stem cells which regenerate the functional layer. The functional layer thickens and then is shed during menstruation in humans and some other mammals, including apes, Old World monkeys, some species of bat, the elephant shrew and the Cairo spiny mouse. In most other mammals, the endometrium is reabsorbed in the estrous cycle. During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. The speculated presence of an endometrial microbiota has been argued against.
2001-06-12T19:48:34Z
2023-12-28T17:20:22Z
[ "Template:Webarchive", "Template:SUNYAnatomyFigs", "Template:OklahomaHistology", "Template:Authority control", "Template:Short description", "Template:Further", "Template:Cite web", "Template:Infobox anatomy", "Template:Anchor", "Template:Cite book", "Template:EmbryologySwiss", "Template:Reflist", "Template:Cite journal", "Template:BUHistology", "Template:Female reproductive system" ]
https://en.wikipedia.org/wiki/Endometrium
9,510
Electronic music
Electronic music is a genre of music that employs electronic musical instruments, digital instruments, or circuitry-based music technology in its creation. It includes both music made using electronic and electromechanical means (electroacoustic music). Pure electronic instruments depended entirely on circuitry-based sound generation, for instance using devices such as an electronic oscillator, theremin, or synthesizer. Electromechanical instruments can have mechanical parts such as strings, hammers, and electric elements including magnetic pickups, power amplifiers and loudspeakers. Such electromechanical devices include the telharmonium, Hammond organ, electric piano and the electric guitar. The first electronic musical devices were developed at the end of the 19th century. During the 1920s and 1930s, some electronic instruments were introduced and the first compositions featuring them were written. By the 1940s, magnetic audio tape allowed musicians to tape sounds and then modify them by changing the tape speed or direction, leading to the development of electroacoustic tape music in the 1940s, in Egypt and France. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds. Music produced solely from electronic generators was first produced in Germany in 1953 by Karlheinz Stockhausen. Electronic music was also created in Japan and the United States beginning in the 1950s, and algorithmic composition with computers was first demonstrated in the same decade. During the 1960s, digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop, and EDM. In the early 1980s mass-produced digital synthesizers, such as the Yamaha DX7, became popular, and MIDI (Musical Instrument Digital Interface) was developed. In the same decade, with a greater reliance on synthesizers and the adoption of programmable drum machines, electronic popular music came to the fore. During the 1990s, with the proliferation of increasingly affordable music technology, electronic music production became an established part of popular culture. In Berlin starting in 1989, the Love Parade became the largest street party with over 1 million visitors, inspiring other such popular celebrations of electronic music. Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Pop electronic music is most recognizable in its 4/4 form and more connected with the mainstream than preceding forms which were popular in niche markets. At the turn of the 20th century, experimentation with emerging electronics led to the first electronic musical instruments. These initial inventions were not sold, but were instead used in demonstrations and public performances. The audiences were presented with reproductions of existing music instead of new compositions for the instruments. 
While some were considered novelties and produced simple tones, the Telharmonium synthesized the sound of several orchestral instruments with reasonable precision. It achieved viable public interest and made commercial progress into streaming music through telephone networks. Critics of musical conventions at the time saw promise in these developments. Ferruccio Busoni encouraged the composition of microtonal music allowed for by electronic instruments. He predicted the use of machines in future music, writing the influential Sketch of a New Esthetic of Music (1907). Futurists such as Francesco Balilla Pratella and Luigi Russolo began composing music with acoustic noise to evoke the sound of machinery. They predicted expansions in timbre allowed for by electronics in the influential manifesto The Art of Noises (1913). Developments of the vacuum tube led to electronic instruments that were smaller, amplified, and more practical for performance. In particular, the theremin, ondes Martenot and trautonium were commercially produced by the early 1930s. From the late 1920s, the increased practicality of electronic instruments influenced composers such as Joseph Schillinger to adopt them. They were typically used within orchestras, and most composers wrote parts for the theremin that could otherwise be performed with string instruments. Avant-garde composers criticized the predominant use of electronic instruments for conventional purposes. The instruments offered expansions in pitch resources that were exploited by advocates of microtonal music such as Charles Ives, Dimitrios Levidis, Olivier Messiaen and Edgard Varèse. Further, Percy Grainger used the theremin to abandon fixed tonation entirely, while Russian composers such as Gavriil Popov treated it as a source of noise in otherwise-acoustic noise music. Developments in early recording technology paralleled that of electronic instruments. The first means of recording and reproducing audio was invented in the late 19th century with the mechanical phonograph. Record players became a common household item, and by the 1920s composers were using them to play short recordings in performances. The introduction of electrical recording in 1925 was followed by increased experimentation with record players. Paul Hindemith and Ernst Toch composed several pieces in 1930 by layering recordings of instruments and vocals at adjusted speeds. Influenced by these techniques, John Cage composed Imaginary Landscape No. 1 in 1939 by adjusting the speeds of recorded tones. Composers began to experiment with newly developed sound-on-film technology. Recordings could be spliced together to create sound collages, such as those by Tristan Tzara, Kurt Schwitters, Filippo Tommaso Marinetti, Walter Ruttmann and Dziga Vertov. Further, the technology allowed sound to be graphically created and modified. These techniques were used to compose soundtracks for several films in Germany and Russia, in addition to the popular Dr. Jekyll and Mr. Hyde in the United States. Experiments with graphical sound were continued by Norman McLaren from the late 1930s. The first practical audio tape recorder was unveiled in 1935. Improvements to the technology were made using the AC biasing technique, which significantly improved recording fidelity. As early as 1942, test recordings were being made in stereo. Although these developments were initially confined to Germany, recorders and tapes were brought to the United States following the end of World War II. 
These were the basis for the first commercially produced tape recorder in 1948. In 1944, before the use of magnetic tape for compositional purposes, Egyptian composer Halim El-Dabh, while still a student in Cairo, used a cumbersome wire recorder to record sounds of an ancient zaar ceremony. Using facilities at the Middle East Radio studios El-Dabh processed the recorded material using reverberation, echo, voltage controls and re-recording. What resulted is believed to be the earliest tape music composition. The resulting work was entitled The Expression of Zaar and it was presented in 1944 at an art gallery event in Cairo. While his initial experiments in tape-based composition were not widely known outside of Egypt at the time, El-Dabh is also known for his later work in electronic music at the Columbia-Princeton Electronic Music Center in the late 1950s. Following his work with Studio d'Essai at Radiodiffusion Française (RDF), during the early 1940s, Pierre Schaeffer is credited with originating the theory and practice of musique concrète. In the late 1940s, experiments in sound-based composition using shellac record players were first conducted by Schaeffer. In 1950, the techniques of musique concrete were expanded when magnetic tape machines were used to explore sound manipulation practices such as speed variation (pitch shift) and tape splicing. On 5 October 1948, RDF broadcast Schaeffer's Etude aux chemins de fer. This was the first "movement" of Cinq études de bruits, and marked the beginning of studio realizations and musique concrète (or acousmatic art). Schaeffer employed a disc cutting lathe, four turntables, a four-channel mixer, filters, an echo chamber, and a mobile recording unit. Not long after this, Pierre Henry began collaborating with Schaeffer, a partnership that would have profound and lasting effects on the direction of electronic music. Another associate of Schaeffer, Edgard Varèse, began work on Déserts, a work for chamber orchestra and tape. The tape parts were created at Pierre Schaeffer's studio and were later revised at Columbia University. In 1950, Schaeffer gave the first public (non-broadcast) concert of musique concrète at the École Normale de Musique de Paris. "Schaeffer used a PA system, several turntables, and mixers. The performance did not go well, as creating live montages with turntables had never been done before." Later that same year, Pierre Henry collaborated with Schaeffer on Symphonie pour un homme seul (1950) the first major work of musique concrete. In Paris in 1951, in what was to become an important worldwide trend, RTF established the first studio for the production of electronic music. Also in 1951, Schaeffer and Henry produced an opera, Orpheus, for concrete sounds and voices. By 1951 the work of Schaeffer, composer-percussionist Pierre Henry, and sound engineer Jacques Poullin had received official recognition and The Groupe de Recherches de Musique Concrète, Club d 'Essai de la Radiodiffusion-Télévision Française was established at RTF in Paris, the ancestor of the ORTF. Karlheinz Stockhausen worked briefly in Schaeffer's studio in 1952, and afterward for many years at the WDR Cologne's Studio for Electronic Music. 1954 saw the advent of what would now be considered authentic electric plus acoustic compositions—acoustic instrumentation augmented/accompanied by recordings of manipulated or electronically generated sound. 
Three major works were premiered that year: Varèse's Déserts, for chamber ensemble and tape sounds, and two works by Otto Luening and Vladimir Ussachevsky: Rhapsodic Variations for the Louisville Symphony and A Poem in Cycles and Bells, both for orchestra and tape. Because he had been working at Schaeffer's studio, the tape part for Varèse's work contains much more concrete sounds than electronic. "A group made up of wind instruments, percussion and piano alternate with the mutated sounds of factory noises and ship sirens and motors, coming from two loudspeakers." At the German premiere of Déserts in Hamburg, which was conducted by Bruno Maderna, the tape controls were operated by Karlheinz Stockhausen. The title Déserts suggested to Varèse not only "all physical deserts (of sand, sea, snow, of outer space, of empty streets), but also the deserts in the mind of man; not only those stripped aspects of nature that suggest bareness, aloofness, timelessness, but also that remote inner space no telescope can reach, where man is alone, a world of mystery and essential loneliness." In Cologne, what would become the most famous electronic music studio in the world was officially opened at the radio studios of the NWDR in 1953, though it had been in the planning stages as early as 1950 and early compositions were made and broadcast in 1951. The brainchild of Werner Meyer-Eppler, Robert Beyer, and Herbert Eimert (who became its first director), the studio was soon joined by Karlheinz Stockhausen and Gottfried Michael Koenig. In his 1949 thesis Elektronische Klangerzeugung: Elektronische Musik und Synthetische Sprache, Meyer-Eppler conceived the idea to synthesize music entirely from electronically produced signals; in this way, elektronische Musik was sharply differentiated from French musique concrète, which used sounds recorded from acoustical sources. In 1953, Stockhausen composed his Studie I, followed in 1954 by Elektronische Studie II—the first electronic piece to be published as a score. In 1955, more experimental and electronic studios began to appear. Notable were the creation of the Studio di fonologia musicale di Radio Milano, a studio at the NHK in Tokyo founded by Toshiro Mayuzumi, and the Philips studio at Eindhoven, the Netherlands, which moved to the University of Utrecht as the Institute of Sonology in 1960. "With Stockhausen and Mauricio Kagel in residence, [Cologne] became a year-round hive of charismatic avant-gardism." Stockhausen continued to work at the Cologne studio for many years, on two occasions combining electronically generated sounds with relatively conventional orchestras—in Mixtur (1964) and Hymnen, dritte Region mit Orchester (1967). Stockhausen stated that his listeners had told him his electronic music gave them an experience of "outer space", sensations of flying, or being in a "fantastic dream world". In the United States, electronic music was being created as early as 1939, when John Cage published Imaginary Landscape, No. 1, using two variable-speed turntables, frequency recordings, muted piano, and cymbal, but no electronic means of production. Cage composed five more "Imaginary Landscapes" between 1942 and 1952 (one withdrawn), mostly for percussion ensemble, though No. 4 is for twelve radios and No. 5, written in 1952, uses 42 recordings and is to be realized as a magnetic tape. According to Otto Luening, Cage also performed Williams Mix at Donaueschingen in 1954, using eight loudspeakers, three years after his alleged collaboration. 
Williams Mix was a success at the Donaueschingen Festival, where it made a "strong impression". The Music for Magnetic Tape Project was formed by members of the New York School (John Cage, Earle Brown, Christian Wolff, David Tudor, and Morton Feldman), and lasted three years until 1954. Cage wrote of this collaboration: "In this social darkness, therefore, the work of Earle Brown, Morton Feldman, and Christian Wolff continues to present a brilliant light, for the reason that at the several points of notation, performance, and audition, action is provocative." Cage completed Williams Mix in 1953 while working with the Music for Magnetic Tape Project. The group had no permanent facility, and had to rely on borrowed time in commercial sound studios, including the studio of Bebe and Louis Barron. In the same year Columbia University purchased its first tape recorder—a professional Ampex machine—to record concerts. Vladimir Ussachevsky, who was on the music faculty of Columbia University, was placed in charge of the device, and almost immediately began experimenting with it. Herbert Russcol writes: "Soon he was intrigued with the new sonorities he could achieve by recording musical instruments and then superimposing them on one another." Ussachevsky said later: "I suddenly realized that the tape recorder could be treated as an instrument of sound transformation." On Thursday, 8 May 1952, Ussachevsky presented several demonstrations of tape music/effects that he created at his Composers Forum, in the McMillin Theatre at Columbia University. These included Transposition, Reverberation, Experiment, Composition, and Underwater Valse. In an interview, he stated: "I presented a few examples of my discovery in a public concert in New York together with other compositions I had written for conventional instruments." Otto Luening, who had attended this concert, remarked: "The equipment at his disposal consisted of an Ampex tape recorder . . . and a simple box-like device designed by the brilliant young engineer, Peter Mauzey, to create feedback, a form of mechanical reverberation. Other equipment was borrowed or purchased with personal funds." Just three months later, in August 1952, Ussachevsky traveled to Bennington, Vermont, at Luening's invitation to present his experiments. There, the two collaborated on various pieces. Luening described the event: "Equipped with earphones and a flute, I began developing my first tape-recorder composition. Both of us were fluent improvisors and the medium fired our imaginations." They played some early pieces informally at a party, where "a number of composers almost solemnly congratulated us saying, 'This is it' ('it' meaning the music of the future)." Word quickly reached New York City. Oliver Daniel telephoned and invited the pair to "produce a group of short compositions for the October concert sponsored by the American Composers Alliance and Broadcast Music, Inc., under the direction of Leopold Stokowski at the Museum of Modern Art in New York. After some hesitation, we agreed. . . . Henry Cowell placed his home and studio in Woodstock, New York, at our disposal. With the borrowed equipment in the back of Ussachevsky's car, we left Bennington for Woodstock and stayed two weeks. . . . In late September 1952, the travelling laboratory reached Ussachevsky's living room in New York, where we eventually completed the compositions." Two months later, on 28 October, Vladimir Ussachevsky and Otto Luening presented the first Tape Music concert in the United States. 
The concert included Luening's Fantasy in Space (1952)—"an impressionistic virtuoso piece" using manipulated recordings of flute—and Low Speed (1952), an "exotic composition that took the flute far below its natural range." Both pieces were created at the home of Henry Cowell in Woodstock, New York. After several concerts caused a sensation in New York City, Ussachevsky and Luening were invited onto a live broadcast of NBC's Today Show to do an interview demonstration—the first televised electroacoustic performance. Luening described the event: "I improvised some [flute] sequences for the tape recorder. Ussachevsky then and there put them through electronic transformations." The score for Forbidden Planet, by Louis and Bebe Barron, was entirely composed using custom-built electronic circuits and tape recorders in 1956 (but no synthesizers in the modern sense of the word). In 1929, Nikolai Obukhov invented the "sounding cross" (la croix sonore), comparable to the principle of the theremin. In the 1930s, Nikolai Ananyev invented "sonar", and engineer Alexander Gurov — neoviolena, I. Ilsarov — ilston, A. Rimsky-Korsakov and A. Ivanov — emiriton. Composer and inventor Arseny Avraamov was engaged in scientific work on sound synthesis and conducted a number of experiments that would later form the basis of Soviet electro-musical instruments. In 1956 Vyacheslav Mescherin created the Ensemble of electro-musical instruments, which used theremins, electric harps, electric organs, the first synthesizer in the USSR "Ekvodin", and also created the first Soviet reverb machine. The style in which Meshcherin's ensemble played is known as "Space age pop". In 1957, engineer Igor Simonov assembled a working model of a noise recorder (electroeoliphone), with the help of which it was possible to extract various timbres and consonances of a noise nature. In 1958, Evgeny Murzin designed the ANS synthesizer, one of the world's first polyphonic musical synthesizers. Founded by Murzin in 1966, the Moscow Experimental Electronic Music Studio became the base for a new generation of experimenters – Eduard Artemyev, Alexander Nemtin, Sándor Kallós, Sofia Gubaidulina, Alfred Schnittke, and Vladimir Martynov. By the end of the 1960s, musical groups playing light electronic music appeared in the USSR. At the state level, this music began to be used to attract foreign tourists to the country and for broadcasting to foreign countries. In the mid-1970s, composer Alexander Zatsepin designed an "orchestrolla" – a modification of the mellotron. The Baltic Soviet Republics also had their own pioneers: in the Estonian SSR — Sven Grunberg, in the Lithuanian SSR — Gedrus Kupriavicius, in the Latvian SSR — Opus and Zodiac. The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the Colonel Bogey March, of which no known recordings exist, only the accurate reconstruction. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice. CSIRAC was never recorded, but the music played was accurately reconstructed. The oldest known recordings of computer-generated music were played by the Ferranti Mark 1 computer, a commercial version of the Baby Machine from the University of Manchester in the autumn of 1951. The music program was written by Christopher Strachey. 
The earliest group of electronic musical instruments in Japan, the Yamaha Magna Organ, was built in 1935. However, after World War II, Japanese composers such as Minao Shibata knew of the development of electronic musical instruments. By the late 1940s, Japanese composers began experimenting with electronic music and institutional sponsorship enabled them to experiment with advanced equipment. Their infusion of Asian music into the emerging genre would eventually support Japan's popularity in the development of music technology several decades later. Following the foundation of electronics company Sony in 1946, composers Toru Takemitsu and Minao Shibata independently explored possible uses for electronic technology to produce music. Takemitsu had ideas similar to musique concrète, which he was unaware of, while Shibata foresaw the development of synthesizers and predicted a drastic change in music. Sony began producing popular magnetic tape recorders for government and public use. The avant-garde collective Jikken Kōbō (Experimental Workshop), founded in 1950, was offered access to emerging audio technology by Sony. The company hired Toru Takemitsu to demonstrate their tape recorders with compositions and performances of electronic tape music. The first electronic tape pieces by the group were "Toraware no Onna" ("Imprisoned Woman") and "Piece B", composed in 1951 by Kuniharu Akiyama. Many of the electroacoustic tape pieces they produced were used as incidental music for radio, film, and theatre. They also held concerts employing a slide show synchronized with a recorded soundtrack. Composers outside of the Jikken Kōbō, such as Yasushi Akutagawa, Saburo Tominaga, and Shirō Fukai, were also experimenting with radiophonic tape music between 1952 and 1953. Musique concrète was introduced to Japan by Toshiro Mayuzumi, who was influenced by a Pierre Schaeffer concert. From 1952, he composed tape music pieces for a comedy film, a radio broadcast, and a radio drama. However, Schaeffer's concept of sound object was not influential among Japanese composers, who were mainly interested in overcoming the restrictions of human performance. This led to several Japanese electroacoustic musicians making use of serialism and twelve-tone techniques, evident in Yoshirō Irino's 1951 dodecaphonic piece "Concerto da Camera", in the organization of electronic sounds in Mayuzumi's "X, Y, Z for Musique Concrète", and later in Shibata's electronic music by 1956. Modelled on the NWDR studio in Cologne, an NHK electronic music studio was established in Tokyo in 1954 and became one of the world's leading electronic music facilities. The NHK electronic music studio was equipped with technologies such as tone-generating and audio processing equipment, recording and radiophonic equipment, ondes Martenot, Monochord and Melochord, sine-wave oscillators, tape recorders, ring modulators, band-pass filters, and four- and eight-channel mixers. Musicians associated with the studio included Toshiro Mayuzumi, Minao Shibata, Joji Yuasa, Toshi Ichiyanagi, and Toru Takemitsu. The studio's first electronic compositions were completed in 1955, including Mayuzumi's five-minute pieces "Studie I: Music for Sine Wave by Proportion of Prime Number", "Music for Modulated Wave by Proportion of Prime Number" and "Invention for Square Wave and Sawtooth Wave" produced using the studio's various tone-generating capabilities, and Shibata's 20-minute stereo piece "Musique Concrète for Stereophonic Broadcast". The impact of computers continued in 1956. 
Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. "... Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly." Later developments included the work of Max Mathews at Bell Laboratories, who developed the influential MUSIC I program in 1957, one of the first computer programs to play electronic music. Vocoder technology was also a major development in this early era. In 1956, Stockhausen composed Gesang der Jünglinge, the first major work of the Cologne studio, based on a text from the Book of Daniel. An important technological development of that year was the invention of the Clavivox synthesizer by Raymond Scott with subassembly by Robert Moog. In 1957, Kid Baltan (Dick Raaymakers) and Tom Dissevelt released their debut album, Song Of The Second Moon, recorded at the Philips studio in the Netherlands. The public remained interested in the new sounds being created around the world, as can be deduced by the inclusion of Varèse's Poème électronique, which was played over four hundred loudspeakers at the Philips Pavilion of the 1958 Brussels World Fair. That same year, Mauricio Kagel, an Argentine composer, composed Transición II. The work was realized at the WDR studio in Cologne. Two musicians performed on the piano, one in the traditional manner, the other playing on the strings, frame, and case. Two other performers used tape to unite the presentation of live sounds with the future of prerecorded materials from later on and its past of recordings made earlier in the performance. In 1958, Columbia-Princeton developed the RCA Mark II Sound Synthesizer, the first programmable synthesizer. Prominent composers such as Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Charles Wuorinen, Halim El-Dabh, Bülent Arel and Mario Davidovsky used the RCA Synthesizer extensively in various compositions. One of the most influential composers associated with the early years of the studio was Egypt's Halim El-Dabh who, after having developed the earliest known electronic tape music in 1944, became more famous for Leiyla and the Poet, a 1959 series of electronic compositions that stood out for its immersion and seamless fusion of electronic and folk music, in contrast to the more mathematical approach used by serial composers of the time such as Babbitt. El-Dabh's Leiyla and the Poet, released as part of the album Columbia-Princeton Electronic Music Center in 1961, would be cited as a strong influence by a number of musicians, ranging from Neil Rolnick, Charles Amirkhanian and Alice Shields to rock musicians Frank Zappa and The West Coast Pop Art Experimental Band. Following the emergence of differences within the GRMC (Groupe de Recherche de Musique Concrète) Pierre Henry, Philippe Arthuys, and several of their colleagues, resigned in April 1958. Schaeffer created a new collective, called Groupe de Recherches Musicales (GRM) and set about recruiting new members including Luc Ferrari, Beatriz Ferreyra, François-Bernard Mâche, Iannis Xenakis, Bernard Parmegiani, and Mireille Chamass-Kyrou. Later arrivals included Ivo Malec, Philippe Carson, Romuald Vandelle, Edgardo Canton and François Bayle. These were fertile years for electronic music—not just for academia, but for independent artists as synthesizer technology became more accessible. 
By this time, a strong community of composers and musicians working with new sounds and instruments was established and growing. 1960 witnessed the composition of Luening's Gargoyles for violin and tape as well as the premiere of Stockhausen's Kontakte for electronic sounds, piano, and percussion. This piece existed in two versions—one for 4-channel tape, and the other for tape with human performers. "In Kontakte, Stockhausen abandoned traditional musical form based on linear development and dramatic climax. This new approach, which he termed 'moment form', resembles the 'cinematic splice' techniques in early twentieth-century film." The theremin had been in use since the 1920s but it attained a degree of popular recognition through its use in science-fiction film soundtrack music in the 1950s (e.g., Bernard Herrmann's classic score for The Day the Earth Stood Still). In the UK in this period, the BBC Radiophonic Workshop (established in 1958) came to prominence, thanks in large measure to their work on the BBC science-fiction series Doctor Who. One of the most influential British electronic artists in this period was Workshop staffer Delia Derbyshire, who is now famous for her 1963 electronic realisation of the iconic Doctor Who theme, composed by Ron Grainer. During the time of the UNESCO fellowship for studies in electronic music (1958) Josef Tal went on a study tour in the US and Canada. He summarized his conclusions in two articles that he submitted to UNESCO. In 1961, he established the Centre for Electronic Music in Israel at The Hebrew University of Jerusalem. In 1962, Hugh Le Caine arrived in Jerusalem to install his Creative Tape Recorder in the centre. In the 1990s Tal conducted, together with Dr. Shlomo Markel, in cooperation with the Technion – Israel Institute of Technology, and the Volkswagen Foundation a research project ('Talmark') aimed at the development of a novel musical notation system for electronic music. Milton Babbitt composed his first electronic work using the synthesizer—his Composition for Synthesizer (1961)—which he created using the RCA synthesizer at the Columbia-Princeton Electronic Music Center. For Babbitt, the RCA synthesizer was a dream come true for three reasons. First, the ability to pinpoint and control every musical element precisely. Second, the time needed to realize his elaborate serial structures were brought within practical reach. Third, the question was no longer "What are the limits of the human performer?" but rather "What are the limits of human hearing?" The collaborations also occurred across oceans and continents. In 1961, Ussachevsky invited Varèse to the Columbia-Princeton Studio (CPEMC). Upon arrival, Varese embarked upon a revision of Déserts. He was assisted by Mario Davidovsky and Bülent Arel. The intense activity occurring at CPEMC and elsewhere inspired the establishment of the San Francisco Tape Music Center in 1963 by Morton Subotnick, with additional members Pauline Oliveros, Ramon Sender, Anthony Martin, and Terry Riley. Later, the Center moved to Mills College, directed by Pauline Oliveros, and has since been renamed Center for Contemporary Music. Pietro Grossi was an Italian pioneer of computer composition and tape music, who first experimented with electronic techniques in the early sixties. Grossi was a cellist and composer, born in Venice in 1917. He founded the S 2F M (Studio de Fonologia Musicale di Firenze) in 1963 to experiment with electronic sound and composition. 
Simultaneously in San Francisco, composer Stan Shaff and equipment designer Doug McEachern, presented the first "Audium" concert at San Francisco State College (1962), followed by work at the San Francisco Museum of Modern Art (1963), conceived of as in time, controlled movement of sound in space. Twelve speakers surrounded the audience, four speakers were mounted on a rotating, mobile-like construction above. In an SFMOMA performance the following year (1964), San Francisco Chronicle music critic Alfred Frankenstein commented, "the possibilities of the space-sound continuum have seldom been so extensively explored". In 1967, the first Audium, a "sound-space continuum" opened, holding weekly performances through 1970. In 1975, enabled by seed money from the National Endowment for the Arts, a new Audium opened, designed floor to ceiling for spatial sound composition and performance. "In contrast, there are composers who manipulated sound space by locating multiple speakers at various locations in a performance space and then switching or panning the sound between the sources. In this approach, the composition of spatial manipulation is dependent on the location of the speakers and usually exploits the acoustical properties of the enclosure. Examples include Varese's Poeme Electronique (tape music performed in the Philips Pavilion of the 1958 World Fair, Brussels) and Stan Schaff's Audium installation, currently active in San Francisco." Through weekly programs (over 4,500 in 40 years), Shaff "sculpts" sound, performing now-digitized spatial works live through 176 speakers. Jean-Jacques Perrey experimented with Schaeffer's techniques on tape loops and was among the first to use the recently released Moog synthesizer developed by Robert Moog. With this instrument he composed some works with Gershon Kingsley and solo. A well-known example of the use of Moog's full-sized Moog modular synthesizer is the 1968 Switched-On Bach album by Wendy Carlos, which triggered a craze for synthesizer music. In 1969 David Tudor brought a Moog modular synthesizer and Ampex tape machines to the National Institute of Design in Ahmedabad with the support of the Sarabhai family, forming the foundation of India's first electronic music studio. Here a group of composers Jinraj Joshipura, Gita Sarabhai, SC Sharma, IS Mathur and Atul Desai developed experimental sound compositions between 1969 and 1973. Musical melodies were first generated by the computer CSIRAC in Australia in 1950. There were newspaper reports from America and England (early and recently) that computers may have played music earlier, but thorough research has debunked these stories as there is no evidence to support the newspaper reports (some of which were obviously speculative). Research has shown that people speculated about computers playing music, possibly because computers would make noises, but there is no evidence that they actually did it. The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard in the 1950s. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the "Colonel Bogey March" of which no known recordings exist. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice which is current computer-music practice. 
The first music to be performed in England was a performance of the British National Anthem that was programmed by Christopher Strachey on the Ferranti Mark I, late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, "Ba, Ba Black Sheep", and "In the Mood" and this is recognised as the earliest recording of a computer to play music. This recording can be heard at this Manchester University site. Researchers at the University of Canterbury, Christchurch declicked and restored this recording in 2016 and the results may be heard on SoundCloud. The late 1950s, 1960s, and 1970s also saw the development of large mainframe computer synthesis. Starting in 1957, Max Mathews of Bell Labs developed the MUSIC programs, culminating in MUSIC V, a direct digital synthesis language. Laurie Spiegel developed the algorithmic musical composition software "Music Mouse" (1986) for Macintosh, Amiga, and Atari computers. An important new development was the advent of computers to compose music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a composing method that uses mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962), Morsima-Amorsima, ST/10, and Atrées. He developed the computer system UPIC for translating graphical images into musical results and composed Mycènes Alpha (1978) with it. In Europe in 1964, Karlheinz Stockhausen composed Mikrophonie I for tam-tam, hand-held microphones, filters, and potentiometers, and Mixtur for orchestra, four sine-wave generators, and four ring modulators. In 1965 he composed Mikrophonie II for choir, Hammond organ, and ring modulators. In 1966–1967, Reed Ghazala discovered and began to teach "circuit bending"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music [sic] concept. Cosey Fanni Tutti's performance art and musical career explored the concept of 'acceptable' music and she went on to explore the use of sound as a means of desire or discomfort. Wendy Carlos performed selections from her album Switched-On Bach on stage with a synthesizer with the St. Louis Symphony Orchestra; another live performance was with Kurzweil Baroque Ensemble for "Bach at the Beacon" in 1997. In June 2018, Suzanne Ciani released LIVE Quadraphonic, a live album documenting her first solo performance on a Buchla synthesizer in 40 years. It was one of the first quadraphonic vinyl releases in over 30 years. In the 1950s, Japanese electronic musical instruments began influencing the international music industry. Ikutaro Kakehashi, who founded Ace Tone in 1960, developed his own version of electronic percussion that had been already popular on the overseas electronic organ. At the 1964 NAMM Show, he revealed it as the R-1 Rhythm Ace, a hand-operated percussion device that played electronic drum sounds manually as the user pushed buttons, in a similar fashion to modern electronic drum pads. In 1963, Korg released the Donca-Matic DA-20, an electro-mechanical drum machine. In 1965, Nippon Columbia patented a fully electronic drum machine. 
Korg released the Donca-Matic DC-11 electronic drum machine in 1966, which they followed with the Korg Mini Pops, developed as an option for the Yamaha Electone electric organ. Korg's Stageman and Mini Pops series were notable for "natural metallic percussion" sounds and incorporating controls for drum "breaks and fill-ins." In 1967, Ace Tone founder Ikutaro Kakehashi patented a preset rhythm-pattern generator using a diode matrix circuit, similar to Seeburg's earlier U.S. Patent 3,358,068 filed in 1964 (see Drum machine#History), which he released as the FR-1 Rhythm Ace drum machine the same year. It offered 16 preset patterns, and four buttons to manually play each instrument sound (cymbal, claves, cowbell and bass drum). The rhythm patterns could also be cascaded together by pushing multiple rhythm buttons simultaneously, and the possible combinations of rhythm patterns numbered more than a hundred. Ace Tone's Rhythm Ace drum machines found their way into popular music from the late 1960s, followed by Korg drum machines in the 1970s. Kakehashi later left Ace Tone and founded Roland Corporation in 1972, with Roland synthesizers and drum machines becoming highly influential for the next several decades. The company went on to shape popular electronic music more than perhaps any other manufacturer. Turntablism has origins in the invention of direct-drive turntables. Early belt-drive turntables were unsuitable for turntablism, since they had a slow start-up time and were prone to wear and breakage, as the belt would break from backspin or scratching. The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka, Japan. It eliminated belts, and instead employed a motor to directly drive a platter on which a vinyl record rests. In 1969, Matsushita released it as the SP-10, the first direct-drive turntable on the market, and the first in their influential Technics series of turntables. It was succeeded by the Technics SL-1100 and SL-1200 in the early 1970s, and they were widely adopted by hip hop musicians, with the SL-1200 remaining the most widely used turntable in DJ culture for several decades. In Jamaica, a form of popular electronic music emerged in the 1960s, dub music, rooted in sound system culture. Dub music was pioneered by studio engineers, such as Sylvan Morris, King Tubby, Errol Thompson, Lee "Scratch" Perry, and Scientist, producing reggae-influenced experimental music with electronic sound technology, in recording studios and at sound system parties. Their experiments included forms of tape-based composition comparable to aspects of musique concrète, an emphasis on repetitive rhythmic structures (often stripped of their harmonic elements) comparable to minimalism, the electronic manipulation of spatiality, the sonic electronic manipulation of pre-recorded musical materials from mass media, deejays toasting over pre-recorded music comparable to live electronic music, remixing music, turntablism, and the mixing and scratching of vinyl. Despite the limited electronic equipment available to dub pioneers such as King Tubby and Lee "Scratch" Perry, their experiments in remix culture were musically cutting-edge. King Tubby, for example, was a sound system proprietor and electronics technician, whose small front-room studio in the Waterhouse ghetto of western Kingston was a key site of dub music creation. 
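As a rough illustration of the preset-pattern cascading described above for the Rhythm Ace, the following sketch (in Python, with entirely hypothetical pattern data rather than the machine's actual voicings) layers two step patterns the way holding two rhythm buttons effectively does: an instrument fires on a step if any selected preset fires on it.

PRESETS = {
    # 16 steps per instrument; 1 means the sound fires on that step.
    # These patterns are made up for illustration only.
    "rock":  {"bass drum": [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0],
              "cymbal":    [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0]},
    "bossa": {"claves":    [1,0,0,1, 0,0,1,0, 0,0,1,0, 0,1,0,0],
              "cowbell":   [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]},
}

def cascade(*names):
    # Layer several presets as if their buttons were held down together.
    combined = {}
    for name in names:
        for instrument, steps in PRESETS[name].items():
            current = combined.setdefault(instrument, [0] * 16)
            combined[instrument] = [a | b for a, b in zip(current, steps)]
    return combined

print(cascade("rock", "bossa"))

Layering presets in this way makes the number of usable combinations grow quickly, which is consistent with the more than one hundred combinations mentioned above.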
In the late 1960s, pop and rock musicians, including the Beach Boys and the Beatles, began to use electronic instruments, like the theremin and Mellotron, to supplement and define their sound. The first bands to utilize the Moog synthesizer would be the Doors on Strange Days as well as the Monkees on Pisces, Aquarius, Capricorn & Jones Ltd. In his book Electronic and Experimental Music, Thom Holmes recognises the Beatles' 1966 recording "Tomorrow Never Knows" as the song that "ushered in a new era in the use of electronic music in rock and pop music" due to the band's incorporation of tape loops and reversed and speed-manipulated tape sounds. Also in the late 1960s, the music duos Silver Apples, Beaver and Krause, and experimental rock bands like White Noise, the United States of America, Fifty Foot Hose, and Gong are regarded as pioneers in the electronic rock and electronica genres for their work in melding psychedelic rock with oscillators and synthesizers. The 1969 instrumental "Popcorn" written by Gershon Kingsley for Music To Moog By became a worldwide success due to the 1972 version made by Hot Butter. The Moog synthesizer was brought to the mainstream in 1968 by Switched-On Bach, a bestselling album of Bach compositions arranged for Moog synthesizer by American composer Wendy Carlos. The album achieved critical and commercial success, winning the 1970 Grammy Awards for Best Classical Album, Best Classical Performance – Instrumental Soloist or Soloists (With or Without Orchestra), and Best Engineered Classical Recording. In 1969, David Borden formed the world's first synthesizer ensemble called the Mother Mallard's Portable Masterpiece Company in Ithaca, New York. By the end of the 1960s, the Moog synthesizer took a leading place in the sound of emerging progressive rock with bands including Pink Floyd, Yes, Emerson, Lake & Palmer, and Genesis making them part of their sound. Instrumental prog rock was particularly significant in continental Europe, allowing bands like Kraftwerk, Tangerine Dream, Cluster, Can, Neu!, and Faust to circumvent the language barrier. Their synthesiser-heavy "krautrock", along with the work of Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock. Ambient dub was pioneered by King Tubby and other Jamaican sound artists, using DJ-inspired ambient electronics, complete with drop-outs, echo, equalization and psychedelic electronic effects. It featured layering techniques and incorporated elements of world music, deep basslines and harmonic sounds. Techniques such as a long echo delay were also used. Other notable artists within the genre include Dreadzone, Higher Intelligence Agency, The Orb, Ott, Loop Guru, Woob and Transglobal Underground. Dub music influenced electronic musical techniques later adopted by hip hop music when Jamaican immigrant DJ Kool Herc in the early 1970s introduced Jamaica's sound system culture and dub music techniques to America. One such technique that became popular in hip hop culture was playing two copies of the same record on two turntables in alternation, extending the b-dancers' favorite section. The turntable eventually went on to become the most visible electronic musical instrument, and occasionally the most virtuosic, in the 1980s and 1990s. 
Electronic rock was also produced by several Japanese musicians, including Isao Tomita's Electric Samurai: Switched on Rock (1972), which featured Moog synthesizer renditions of contemporary pop and rock songs, and Osamu Kitajima's progressive rock album Benzaiten (1974). The mid-1970s saw the rise of electronic art music musicians such as Jean Michel Jarre, Vangelis, Tomita and Klaus Schulze, who were significant influences on the development of new-age music. The hi-tech appeal of these works created for some years a trend of listing the electronic musical equipment employed on the album sleeves as a distinctive feature. Electronic music began to appear regularly in radio programming and on best-seller charts, with acts such as the French band Space, with their debut studio album Magic Fly, and Jarre with Oxygène. Between 1977 and 1981, Kraftwerk released albums such as Trans-Europe Express, The Man-Machine and Computer World, which influenced subgenres of electronic music. In this era, the sound of rock musicians like Mike Oldfield and The Alan Parsons Project (credited with the first rock song to feature a digital vocoder, "The Raven" in 1975) was often arranged and blended with electronic effects and music, an approach that became much more prominent in the mid-1980s. Jeff Wayne achieved long-lasting success with his 1978 electronic rock musical version of The War of the Worlds. Film scores also benefited from electronic sound. During the 1970s and 1980s, Wendy Carlos composed the scores for A Clockwork Orange, The Shining and Tron. In 1977, Gene Page recorded a disco version of the hit theme by John Williams from Steven Spielberg's film Close Encounters of the Third Kind. Page's version peaked on the R&B chart at #30. The score of the 1978 film Midnight Express, composed by Italian synth pioneer Giorgio Moroder, won the Academy Award for Best Original Score in 1979, as did Vangelis's score for the 1981 film Chariots of Fire. After the arrival of punk rock, a form of basic electronic rock emerged, increasingly using new digital technology to replace other instruments. The American duo Suicide, who arose from the punk scene in New York, utilized drum machines and synthesizers in a hybrid between electronics and punk on their eponymous 1977 album. Pioneering synth-pop bands which enjoyed success for years included Ultravox with their 1977 track "Hiroshima Mon Amour" on Ha!-Ha!-Ha!, Yellow Magic Orchestra with their self-titled album (1978), The Buggles with their prominent 1979 debut single Video Killed the Radio Star, Gary Numan with his solo debut album The Pleasure Principle and single Cars in 1979, Orchestral Manoeuvres in the Dark with their 1979 single Electricity featured on their eponymous debut album, Depeche Mode with their first single Dreaming of Me, recorded in 1980 and released in 1981, followed by their album Speak & Spell, A Flock of Seagulls with their 1981 single Talking, New Order with Ceremony in 1981, and The Human League with their 1981 hit Don't You Want Me from their third album Dare. The definition of MIDI and the development of digital audio made the development of purely electronic sounds much easier, with audio engineers, producers and composers frequently exploring the possibilities of virtually every new model of electronic sound equipment launched by manufacturers. Synth-pop sometimes used synthesizers to replace all other instruments, but it was more common that bands had one or more keyboardists in their line-ups along with guitarists, bassists, and/or drummers. 
These developments led to the growth of synth-pop, which, after it was adopted by the New Romantic movement, allowed synthesizers to dominate the pop and rock music of the early 1980s, until the style began to fall from popularity in the middle to end of the decade. Along with the aforementioned successful pioneers, key acts included Yazoo, Duran Duran, Spandau Ballet, Culture Club, Talk Talk, Japan, and Eurythmics. Synth-pop was taken up across the world, with international hits for acts including Men Without Hats, Trans-X and Lime from Canada, Telex from Belgium, Peter Schilling, Sandra, Modern Talking, Propaganda and Alphaville from Germany, Yello from Switzerland and Azul y Negro from Spain. Also, the synth sound is a key feature of Italo-disco. Some synth-pop bands created futuristic visual styles for themselves to reinforce the idea that electronic sounds were linked primarily with technology, as did the American band Devo and the Spanish band Aviador Dro. Keyboard synthesizers became so common that even heavy metal rock bands, a genre often regarded by fans of both camps as the opposite of electronic pop in aesthetics, sound and lifestyle, achieved worldwide success with songs such as "Jump" (1983) by Van Halen and "The Final Countdown" (1986) by Europe, both of which feature synths prominently. Elektronmusikstudion (EMS), formerly known as Electroacoustic Music in Sweden, is the Swedish national centre for electronic music and sound art. The research organisation started in 1964 and is based in Stockholm. STEIM is a center for research and development of new musical instruments in the electronic performing arts, located in Amsterdam, Netherlands. STEIM has existed since 1969. It was founded by Misha Mengelberg, Louis Andriessen, Peter Schat, Dick Raaymakers, Jan van Vlijmen, Reinbert de Leeuw, and Konrad Boehmer. This group of Dutch composers had fought for the reformation of Amsterdam's feudal music structures; they insisted on Bruno Maderna's appointment as musical director of the Concertgebouw Orchestra and secured the first public funding for experimental and improvised electronic music in the Netherlands. IRCAM in Paris became a major center for computer music research and realization, and for the development of the Sogitec 4X computer system, which featured then-revolutionary real-time digital signal processing. Pierre Boulez's Répons (1981) for 24 musicians and 6 soloists used the 4X to transform and route soloists to a loudspeaker system. Barry Vercoe describes one of his experiences with early computer sounds: At IRCAM in Paris in 1982, flutist Larry Beauregard had connected his flute to DiGiugno's 4X audio processor, enabling real-time pitch-following. On a Guggenheim at the time, I extended this concept to real-time score-following with automatic synchronized accompaniment, and over the next two years Larry and I gave numerous demonstrations of the computer as a chamber musician, playing Handel flute sonatas, Boulez's Sonatine for flute and piano and by 1984 my own Synapse II for flute and computer—the first piece ever composed expressly for such a setup. A major challenge was finding the right software constructs to support highly sensitive and responsive accompaniment. All of this was pre-MIDI, but the results were impressive even though heavy doses of tempo rubato would continually surprise my Synthetic Performer. In 1985 we solved the tempo rubato problem by incorporating learning from rehearsals (each time you played this way the machine would get better). 
We were also now tracking violin, since our brilliant, young flautist had contracted a fatal cancer. Moreover, this version used a new standard called MIDI, and here I was ably assisted by former student Miller Puckette, whose initial concepts for this task he later expanded into a program called MAX. Released in 1970 by Moog Music, the Mini-Moog was among the first widely available, portable, and relatively affordable synthesizers. It became the most widely used synthesizer of its time in both popular and electronic art music. Patrick Gleeson, playing live with Herbie Hancock at the beginning of the 1970s, pioneered the use of synthesizers in a touring context, where they were subject to stresses the early machines were not designed for. In 1974, the WDR studio in Cologne acquired an EMS Synthi 100 synthesizer, which many composers used to produce notable electronic works—including Rolf Gehlhaar's Fünf deutsche Tänze (1975), Karlheinz Stockhausen's Sirius (1975–1976), and John McGuire's Pulse Music III (1978). Thanks to the miniaturization of electronics in the 1970s, by the start of the 1980s keyboard synthesizers had become lighter and more affordable, integrating into a single slim unit all the necessary audio synthesis electronics and the piano-style keyboard itself, in sharp contrast with the bulky machinery and "cable spaghetti" employed throughout the 1960s and 1970s. The trend began with analog synthesizers and continued with digital synthesizers and samplers as well (see below). In 1975, the Japanese company Yamaha licensed the algorithms for frequency modulation synthesis (FM synthesis) from John Chowning, who had experimented with it at Stanford University since 1971. Yamaha's engineers began adapting Chowning's algorithm for use in a digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation. In 1980, Yamaha released the first FM digital synthesizer, the Yamaha GS-1, though at a high price. In 1983, Yamaha introduced the first stand-alone digital synthesizer, the DX7, which also used FM synthesis and would become one of the best-selling synthesizers of all time. The DX7 was known for its recognizable bright tonalities, due in part to its high sampling rate of 57 kHz. The Korg Poly-800 is a synthesizer released by Korg in 1983. Its initial list price of $795 made it the first fully programmable synthesizer that sold for less than $1000. It had 8-voice polyphony with one digitally controlled oscillator (DCO) per voice. The Casio CZ-101 was the first and best-selling phase distortion synthesizer in the Casio CZ line. Released in November 1984, it was one of the first (if not the first) fully programmable polyphonic synthesizers available for under $500. The Roland D-50 is a digital synthesizer produced by Roland and released in April 1987. Its features include subtractive synthesis, on-board effects, a joystick for data manipulation, and an analogue synthesis-styled layout design. The external Roland PG-1000 (1987–1990) programmer could also be attached to the D-50 for more complex manipulation of its sounds. A sampler is an electronic or digital musical instrument which uses sound recordings (or "samples") of real instrument sounds (e.g., a piano, violin or trumpet), excerpts from recorded songs (e.g., a five-second bass guitar riff from a funk song) or found sounds (e.g., sirens and ocean waves). 
The samples are loaded or recorded by the user or by a manufacturer. These sounds are then played back using the sampler program itself, a MIDI keyboard, sequencer or another triggering device (e.g., electronic drums) to perform or compose music. Because these samples are usually stored in digital memory, the information can be quickly accessed. A single sample may often be pitch-shifted to different pitches to produce musical scales and chords. Before computer memory-based samplers, musicians used tape replay keyboards, which store recordings on analog tape. When a key is pressed the tape head contacts the moving tape and plays a sound. The Mellotron was the most notable model, used by many groups in the late 1960s and the 1970s, but such systems were expensive and heavy due to the multiple tape mechanisms involved, and the range of the instrument was limited to three octaves at the most. To change sounds a new set of tapes had to be installed in the instrument. The emergence of the digital sampler made sampling far more practical. The earliest digital sampling was done on the EMS Musys system, developed by Peter Grogono (software), David Cockerell (hardware and interfacing), and Peter Zinovieff (system design and operation) at their London (Putney) Studio c. 1969. The first commercially available sampling synthesizer was the Computer Music Melodian by Harry Mendell (1976). First released in 1977–1978, the Synclavier I, which used FM synthesis re-licensed from Yamaha and was sold mostly to universities, proved to be highly influential among both electronic music composers and music producers, including Mike Thorne, an early adopter from the commercial world, due to its versatility, its cutting-edge technology, and its distinctive sounds. The first polyphonic digital sampling synthesizer was the Australian-produced Fairlight CMI, first available in 1979. These early sampling synthesizers used wavetable sample-based synthesis. In 1980, a group of musicians and music merchants met to standardize an interface that new instruments could use to communicate control instructions with other instruments and computers. This standard was dubbed Musical Instrument Digital Interface (MIDI) and resulted from a collaboration between leading manufacturers, initially Sequential Circuits, Oberheim, Roland—and later, other participants that included Yamaha, Korg, and Kawai. Dave Smith of Sequential Circuits authored a paper proposing the standard, which was presented to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized. MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer. MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments. Miller Puckette developed graphic signal-processing software for the 4X called Max (after Max Mathews) and later ported it to Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition within reach of most composers with a modest computer programming background. 
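To make the kind of control data MIDI standardized more concrete, the following minimal Python sketch assembles raw "note on" and "note off" messages by hand. The three-byte layout (a status byte carrying the message type and channel, then the key number and velocity, each 0–127) follows the published MIDI 1.0 message format; the helper functions themselves are purely illustrative, and sending the bytes to an actual instrument would additionally require a MIDI interface and driver, which is outside this sketch.

def note_on(channel: int, key: int, velocity: int) -> bytes:
    # Status byte 0x90 plus the channel (0-15), then key and velocity (0-127).
    return bytes([0x90 | channel, key, velocity])

def note_off(channel: int, key: int) -> bytes:
    # Status byte 0x80 plus the channel; velocity 0 is a common default.
    return bytes([0x80 | channel, key, 0])

# Middle C (key 60) at moderate velocity on the first channel:
print(note_on(0, 60, 96).hex(" "))   # prints "90 3c 60"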
The early 1980s saw the rise of bass synthesizers, the most influential being the Roland TB-303, a bass synthesizer and sequencer released in late 1981 that later became a fixture in electronic dance music, particularly acid house. One of the first to use it was Charanjit Singh in 1982, though it would not be popularized until Phuture's "Acid Tracks" in 1987. Music sequencers began to be used around the middle of the 20th century, with Tomita's albums of the mid-1970s being later examples. In 1978, Yellow Magic Orchestra were using computer-based technology in conjunction with a synthesiser to produce popular music, making early use of the microprocessor-based Roland MC-8 Microcomposer sequencer. Drum machines, also known as rhythm machines, began to be used around the late 1950s, with a later example being Osamu Kitajima's progressive rock album Benzaiten (1974), which used a rhythm machine along with electronic drums and a synthesizer. In 1977, Ultravox's "Hiroshima Mon Amour" was one of the first singles to use the metronome-like percussion of a Roland TR-77 drum machine. In 1980, Roland Corporation released the TR-808, one of the first and most popular programmable drum machines. The first band to use it was Yellow Magic Orchestra in 1980, and it would later gain widespread popularity with the release of Marvin Gaye's "Sexual Healing" and Afrika Bambaataa's "Planet Rock" in 1982. The TR-808 was a fundamental tool in the later Detroit techno scene of the late 1980s, and was the drum machine of choice for Derrick May and Juan Atkins. The characteristic lo-fi sound of chip music was initially the result of the technical limitations of early computers' sound chips and sound cards; however, the sound has since become sought after in its own right. Common inexpensive sound chips of the first home computers of the 1980s include the SID of the Commodore 64 and the General Instrument AY series and its clones (such as the Yamaha YM2149) used in the ZX Spectrum, Amstrad CPC, MSX compatibles and Atari ST models, among others. Synth-pop continued into the late 1980s, with a format that moved closer to dance music, including the work of acts such as the British duos Pet Shop Boys, Erasure and The Communards, who achieved success through much of the 1990s. The trend has continued to the present day with modern nightclubs worldwide regularly playing electronic dance music (EDM). Today, electronic dance music has radio stations, websites, and publications like Mixmag dedicated solely to the genre. Despite the industry's attempt to create a specific EDM brand, the initialism remains in use as an umbrella term for multiple genres, including dance-pop, house, techno, electro, and trance, as well as their respective subgenres. Moreover, the genre has found commercial and cultural significance in the United States and North America, thanks to the wildly popular big room house/EDM sound that has been incorporated into U.S. pop music and the rise of large-scale commercial raves such as Electric Daisy Carnival, Tomorrowland and Ultra Music Festival. On the other hand, a broad group of electronic-based music styles intended for listening rather than strictly for dancing became known under the "electronica" umbrella, which was also a music scene in the early 1990s in the United Kingdom. 
According to a 1997 Billboard article, "the union of the club community and independent labels" provided the experimental and trend-setting environment in which electronica acts developed and eventually reached the mainstream, citing American labels such as Astralwerks (the Chemical Brothers, Fatboy Slim, the Future Sound of London, Fluke), Moonshine (DJ Keoki), Sims, and City of Angels (the Crystal Method) for popularizing the latest version of electronic music. The category "indie electronic" (or "indietronica") has been used to refer to a wave of groups with roots in independent rock who embraced electronic elements (such as synthesizers, samplers, drum machines, and computer programs) and influences such as early electronic composition, krautrock, synth-pop, and dance music. Recordings are commonly made on laptops using digital audio workstations. The first wave of indie electronic artists began in the 1990s with acts such as Stereolab (who used vintage gear) and Disco Inferno (who embraced modern sampling technology), and the genre expanded in the 2000s as home recording and software synthesizers came into common use. Other acts included Broadcast, Lali Puna, Múm, the Postal Service, Skeletons, and School of Seven Bells. Independent labels associated with the style include Warp, Morr Music, Sub Pop, and Ghostly International. As computer technology has become more accessible and music software has advanced, interacting with music production technology is now possible using means that bear no relationship to traditional musical performance practices: for instance, laptop performance (laptronica), live coding and Algorave. In general, the term Live PA refers to any live performance of electronic music, whether with laptops, synthesizers, or other devices. Beginning around the year 2000, some software-based virtual studio environments emerged, with products such as Propellerhead's Reason and Ableton Live finding popular appeal. Such tools provide viable and cost-effective alternatives to typical hardware-based production studios, and thanks to advances in microprocessor technology, it is now possible to create high-quality music using little more than a single laptop computer. Such advances have democratized music creation, leading to a massive increase in the amount of home-produced electronic music available to the general public via the internet. Software-based instruments and effect units (so-called "plugins") can be incorporated in a computer-based studio using the VST platform. Some of these instruments are more or less exact replicas of existing hardware (such as the Roland D-50, ARP Odyssey, Yamaha DX7, or Korg M1). Circuit bending is the modification of battery-powered toys and synthesizers to create new unintended sound effects. It was pioneered by Reed Ghazala in the 1960s and Reed coined the name "circuit bending" in 1992. Following the circuit bending culture, musicians also began to build their own modular synthesizers, causing a renewed interest in the early 1960s designs. Eurorack became a popular system.
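As a minimal illustration of a purely software-based instrument of the kind described above, the following Python sketch renders a single tone using two-operator frequency modulation (the synthesis principle behind instruments such as the DX7, discussed earlier) and writes it to a WAV file using only the standard library. The frequency, ratio, and modulation-index values are arbitrary choices for demonstration, not any particular instrument's preset.

import math, struct, wave

SAMPLE_RATE = 44100

def fm_tone(freq, seconds, ratio=2.0, index=3.0, amp=0.8):
    # Two-operator FM: a modulator at freq*ratio varies the carrier's phase;
    # 'index' sets the modulation depth, which controls brightness.
    n = int(SAMPLE_RATE * seconds)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        env = 1.0 - i / n  # simple linear decay envelope
        mod = math.sin(2 * math.pi * freq * ratio * t)
        out.append(amp * env * math.sin(2 * math.pi * freq * t + index * mod))
    return out

samples = fm_tone(220.0, 2.0)  # a two-second tone on A3
with wave.open("fm_note.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit PCM
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<%dh" % len(samples),
                              *(int(s * 32767) for s in samples)))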
[ { "paragraph_id": 0, "text": "Error: no inner hatnotes detected (help).", "title": "" }, { "paragraph_id": 1, "text": "Electronic music is a genre of music that employs electronic musical instruments, digital instruments, or circuitry-based music technology in its creation. It includes both music made using electronic and electromechanical means (electroacoustic music). Pure electronic instruments depended entirely on circuitry-based sound generation, for instance using devices such as an electronic oscillator, theremin, or synthesizer. Electromechanical instruments can have mechanical parts such as strings, hammers, and electric elements including magnetic pickups, power amplifiers and loudspeakers. Such electromechanical devices include the telharmonium, Hammond organ, electric piano and the electric guitar.", "title": "" }, { "paragraph_id": 2, "text": "The first electronic musical devices were developed at the end of the 19th century. During the 1920s and 1930s, some electronic instruments were introduced and the first compositions featuring them were written. By the 1940s, magnetic audio tape allowed musicians to tape sounds and then modify them by changing the tape speed or direction, leading to the development of electroacoustic tape music in the 1940s, in Egypt and France. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds. Music produced solely from electronic generators was first produced in Germany in 1953 by Karlheinz Stockhausen. Electronic music was also created in Japan and the United States beginning in the 1950s and Algorithmic composition with computers was first demonstrated in the same decade.", "title": "" }, { "paragraph_id": 3, "text": "During the 1960s, digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop, and EDM. In the early 1980s mass-produced digital synthesizers, such as the Yamaha DX7, became popular, and MIDI (Musical Instrument Digital Interface) was developed. In the same decade, with a greater reliance on synthesizers and the adoption of programmable drum machines, electronic popular music came to the fore. During the 1990s, with the proliferation of increasingly affordable music technology, electronic music production became an established part of popular culture. In Berlin starting in 1989, the Love Parade became the largest street party with over 1 million visitors, inspiring other such popular celebrations of electronic music.", "title": "" }, { "paragraph_id": 4, "text": "Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Pop electronic music is most recognizable in its 4/4 form and more connected with the mainstream than preceding forms which were popular in niche markets.", "title": "" }, { "paragraph_id": 5, "text": "At the turn of the 20th century, experimentation with emerging electronics led to the first electronic musical instruments. 
These initial inventions were not sold, but were instead used in demonstrations and public performances. The audiences were presented with reproductions of existing music instead of new compositions for the instruments. While some were considered novelties and produced simple tones, the Telharmonium synthesized the sound of several orchestral instruments with reasonable precision. It achieved viable public interest and made commercial progress into streaming music through telephone networks.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 6, "text": "Critics of musical conventions at the time saw promise in these developments. Ferruccio Busoni encouraged the composition of microtonal music allowed for by electronic instruments. He predicted the use of machines in future music, writing the influential Sketch of a New Esthetic of Music (1907). Futurists such as Francesco Balilla Pratella and Luigi Russolo began composing music with acoustic noise to evoke the sound of machinery. They predicted expansions in timbre allowed for by electronics in the influential manifesto The Art of Noises (1913).", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 7, "text": "Developments of the vacuum tube led to electronic instruments that were smaller, amplified, and more practical for performance. In particular, the theremin, ondes Martenot and trautonium were commercially produced by the early 1930s.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 8, "text": "From the late 1920s, the increased practicality of electronic instruments influenced composers such as Joseph Schillinger to adopt them. They were typically used within orchestras, and most composers wrote parts for the theremin that could otherwise be performed with string instruments.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 9, "text": "Avant-garde composers criticized the predominant use of electronic instruments for conventional purposes. The instruments offered expansions in pitch resources that were exploited by advocates of microtonal music such as Charles Ives, Dimitrios Levidis, Olivier Messiaen and Edgard Varèse. Further, Percy Grainger used the theremin to abandon fixed tonation entirely, while Russian composers such as Gavriil Popov treated it as a source of noise in otherwise-acoustic noise music.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 10, "text": "Developments in early recording technology paralleled that of electronic instruments. The first means of recording and reproducing audio was invented in the late 19th century with the mechanical phonograph. Record players became a common household item, and by the 1920s composers were using them to play short recordings in performances.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 11, "text": "The introduction of electrical recording in 1925 was followed by increased experimentation with record players. Paul Hindemith and Ernst Toch composed several pieces in 1930 by layering recordings of instruments and vocals at adjusted speeds. Influenced by these techniques, John Cage composed Imaginary Landscape No. 1 in 1939 by adjusting the speeds of recorded tones.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 12, "text": "Composers began to experiment with newly developed sound-on-film technology. 
Recordings could be spliced together to create sound collages, such as those by Tristan Tzara, Kurt Schwitters, Filippo Tommaso Marinetti, Walter Ruttmann and Dziga Vertov. Further, the technology allowed sound to be graphically created and modified. These techniques were used to compose soundtracks for several films in Germany and Russia, in addition to the popular Dr. Jekyll and Mr. Hyde in the United States. Experiments with graphical sound were continued by Norman McLaren from the late 1930s.", "title": "Origins: late 19th century to early 20th century" }, { "paragraph_id": 13, "text": "The first practical audio tape recorder was unveiled in 1935. Improvements to the technology were made using the AC biasing technique, which significantly improved recording fidelity. As early as 1942, test recordings were being made in stereo. Although these developments were initially confined to Germany, recorders and tapes were brought to the United States following the end of World War II. These were the basis for the first commercially produced tape recorder in 1948.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 14, "text": "In 1944, before the use of magnetic tape for compositional purposes, Egyptian composer Halim El-Dabh, while still a student in Cairo, used a cumbersome wire recorder to record sounds of an ancient zaar ceremony. Using facilities at the Middle East Radio studios El-Dabh processed the recorded material using reverberation, echo, voltage controls and re-recording. What resulted is believed to be the earliest tape music composition. The resulting work was entitled The Expression of Zaar and it was presented in 1944 at an art gallery event in Cairo. While his initial experiments in tape-based composition were not widely known outside of Egypt at the time, El-Dabh is also known for his later work in electronic music at the Columbia-Princeton Electronic Music Center in the late 1950s.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 15, "text": "Following his work with Studio d'Essai at Radiodiffusion Française (RDF), during the early 1940s, Pierre Schaeffer is credited with originating the theory and practice of musique concrète. In the late 1940s, experiments in sound-based composition using shellac record players were first conducted by Schaeffer. In 1950, the techniques of musique concrete were expanded when magnetic tape machines were used to explore sound manipulation practices such as speed variation (pitch shift) and tape splicing.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 16, "text": "On 5 October 1948, RDF broadcast Schaeffer's Etude aux chemins de fer. This was the first \"movement\" of Cinq études de bruits, and marked the beginning of studio realizations and musique concrète (or acousmatic art). Schaeffer employed a disc cutting lathe, four turntables, a four-channel mixer, filters, an echo chamber, and a mobile recording unit. Not long after this, Pierre Henry began collaborating with Schaeffer, a partnership that would have profound and lasting effects on the direction of electronic music. Another associate of Schaeffer, Edgard Varèse, began work on Déserts, a work for chamber orchestra and tape. The tape parts were created at Pierre Schaeffer's studio and were later revised at Columbia University.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 17, "text": "In 1950, Schaeffer gave the first public (non-broadcast) concert of musique concrète at the École Normale de Musique de Paris. 
\"Schaeffer used a PA system, several turntables, and mixers. The performance did not go well, as creating live montages with turntables had never been done before.\" Later that same year, Pierre Henry collaborated with Schaeffer on Symphonie pour un homme seul (1950) the first major work of musique concrete. In Paris in 1951, in what was to become an important worldwide trend, RTF established the first studio for the production of electronic music. Also in 1951, Schaeffer and Henry produced an opera, Orpheus, for concrete sounds and voices.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 18, "text": "By 1951 the work of Schaeffer, composer-percussionist Pierre Henry, and sound engineer Jacques Poullin had received official recognition and The Groupe de Recherches de Musique Concrète, Club d 'Essai de la Radiodiffusion-Télévision Française was established at RTF in Paris, the ancestor of the ORTF.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 19, "text": "Karlheinz Stockhausen worked briefly in Schaeffer's studio in 1952, and afterward for many years at the WDR Cologne's Studio for Electronic Music.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 20, "text": "1954 saw the advent of what would now be considered authentic electric plus acoustic compositions—acoustic instrumentation augmented/accompanied by recordings of manipulated or electronically generated sound. Three major works were premiered that year: Varèse's Déserts, for chamber ensemble and tape sounds, and two works by Otto Luening and Vladimir Ussachevsky: Rhapsodic Variations for the Louisville Symphony and A Poem in Cycles and Bells, both for orchestra and tape. Because he had been working at Schaeffer's studio, the tape part for Varèse's work contains much more concrete sounds than electronic. \"A group made up of wind instruments, percussion and piano alternate with the mutated sounds of factory noises and ship sirens and motors, coming from two loudspeakers.\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 21, "text": "At the German premiere of Déserts in Hamburg, which was conducted by Bruno Maderna, the tape controls were operated by Karlheinz Stockhausen. The title Déserts suggested to Varèse not only \"all physical deserts (of sand, sea, snow, of outer space, of empty streets), but also the deserts in the mind of man; not only those stripped aspects of nature that suggest bareness, aloofness, timelessness, but also that remote inner space no telescope can reach, where man is alone, a world of mystery and essential loneliness.\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 22, "text": "In Cologne, what would become the most famous electronic music studio in the world, was officially opened at the radio studios of the NWDR in 1953, though it had been in the planning stages as early as 1950 and early compositions were made and broadcast in 1951. The brainchild of Werner Meyer-Eppler, Robert Beyer, and Herbert Eimert (who became its first director), the studio was soon joined by Karlheinz Stockhausen and Gottfried Michael Koenig. 
In his 1949 thesis Elektronische Klangerzeugung: Elektronische Musik und Synthetische Sprache, Meyer-Eppler conceived the idea to synthesize music entirely from electronically produced signals; in this way, elektronische Musik was sharply differentiated from French musique concrète, which used sounds recorded from acoustical sources.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 23, "text": "In 1953, Stockhausen composed his Studie I, followed in 1954 by Elektronische Studie II—the first electronic piece to be published as a score. In 1955, more experimental and electronic studios began to appear. Notable were the creation of the Studio di fonologia musicale di Radio Milano, a studio at the NHK in Tokyo founded by Toshiro Mayuzumi, and the Philips studio at Eindhoven, the Netherlands, which moved to the University of Utrecht as the Institute of Sonology in 1960.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 24, "text": "\"With Stockhausen and Mauricio Kagel in residence, [Cologne] became a year-round hive of charismatic avante-gardism.\" on two occasions combining electronically generated sounds with relatively conventional orchestras—in Mixtur (1964) and Hymnen, dritte Region mit Orchester (1967). Stockhausen stated that his listeners had told him his electronic music gave them an experience of \"outer space\", sensations of flying, or being in a \"fantastic dream world\".", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 25, "text": "In the United States, electronic music was being created as early as 1939, when John Cage published Imaginary Landscape, No. 1, using two variable-speed turntables, frequency recordings, muted piano, and cymbal, but no electronic means of production. Cage composed five more \"Imaginary Landscapes\" between 1942 and 1952 (one withdrawn), mostly for percussion ensemble, though No. 4 is for twelve radios and No. 5, written in 1952, uses 42 recordings and is to be realized as a magnetic tape. According to Otto Luening, Cage also performed Williams Mix at Donaueschingen in 1954, using eight loudspeakers, three years after his alleged collaboration. Williams Mix was a success at the Donaueschingen Festival, where it made a \"strong impression\".", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 26, "text": "The Music for Magnetic Tape Project was formed by members of the New York School (John Cage, Earle Brown, Christian Wolff, David Tudor, and Morton Feldman), and lasted three years until 1954. Cage wrote of this collaboration: \"In this social darkness, therefore, the work of Earle Brown, Morton Feldman, and Christian Wolff continues to present a brilliant light, for the reason that at the several points of notation, performance, and audition, action is provocative.\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 27, "text": "Cage completed Williams Mix in 1953 while working with the Music for Magnetic Tape Project. The group had no permanent facility, and had to rely on borrowed time in commercial sound studios, including the studio of Bebe and Louis Barron.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 28, "text": "In the same year Columbia University purchased its first tape recorder—a professional Ampex machine—to record concerts. 
Vladimir Ussachevsky, who was on the music faculty of Columbia University, was placed in charge of the device, and almost immediately began experimenting with it.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 29, "text": "Herbert Russcol writes: \"Soon he was intrigued with the new sonorities he could achieve by recording musical instruments and then superimposing them on one another.\" Ussachevsky said later: \"I suddenly realized that the tape recorder could be treated as an instrument of sound transformation.\" On Thursday, 8 May 1952, Ussachevsky presented several demonstrations of tape music/effects that he created at his Composers Forum, in the McMillin Theatre at Columbia University. These included Transposition, Reverberation, Experiment, Composition, and Underwater Valse. In an interview, he stated: \"I presented a few examples of my discovery in a public concert in New York together with other compositions I had written for conventional instruments.\" Otto Luening, who had attended this concert, remarked: \"The equipment at his disposal consisted of an Ampex tape recorder . . . and a simple box-like device designed by the brilliant young engineer, Peter Mauzey, to create feedback, a form of mechanical reverberation. Other equipment was borrowed or purchased with personal funds.\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 30, "text": "Just three months later, in August 1952, Ussachevsky traveled to Bennington, Vermont, at Luening's invitation to present his experiments. There, the two collaborated on various pieces. Luening described the event: \"Equipped with earphones and a flute, I began developing my first tape-recorder composition. Both of us were fluent improvisors and the medium fired our imaginations.\" They played some early pieces informally at a party, where \"a number of composers almost solemnly congratulated us saying, 'This is it' ('it' meaning the music of the future).\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 31, "text": "Word quickly reached New York City. Oliver Daniel telephoned and invited the pair to \"produce a group of short compositions for the October concert sponsored by the American Composers Alliance and Broadcast Music, Inc., under the direction of Leopold Stokowski at the Museum of Modern Art in New York. After some hesitation, we agreed. . . . Henry Cowell placed his home and studio in Woodstock, New York, at our disposal. With the borrowed equipment in the back of Ussachevsky's car, we left Bennington for Woodstock and stayed two weeks. . . . In late September 1952, the travelling laboratory reached Ussachevsky's living room in New York, where we eventually completed the compositions.\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 32, "text": "Two months later, on 28 October, Vladimir Ussachevsky and Otto Luening presented the first Tape Music concert in the United States. The concert included Luening's Fantasy in Space (1952)—\"an impressionistic virtuoso piece\" using manipulated recordings of flute—and Low Speed (1952), an \"exotic composition that took the flute far below its natural range.\" Both pieces were created at the home of Henry Cowell in Woodstock, New York. After several concerts caused a sensation in New York City, Ussachevsky and Luening were invited onto a live broadcast of NBC's Today Show to do an interview demonstration—the first televised electroacoustic performance. 
Luening described the event: \"I improvised some [flute] sequences for the tape recorder. Ussachevsky then and there put them through electronic transformations.\"", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 33, "text": "The score for Forbidden Planet, by Louis and Bebe Barron, was entirely composed using custom-built electronic circuits and tape recorders in 1956 (but no synthesizers in the modern sense of the word).", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 34, "text": "In 1929, Nikolai Obukhov invented the \"sounding cross\" (la croix sonore), comparable to the principle of the theremin. In the 1930s, Nikolai Ananyev invented \"sonar\", and engineer Alexander Gurov — neoviolena, I. Ilsarov — ilston., A. Rimsky-Korsakov [ru] and A. Ivanov — emiriton [ru]. Composer and inventor Arseny Avraamov was engaged in scientific work on sound synthesis and conducted a number of experiments that would later form the basis of Soviet electro-musical instruments.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 35, "text": "In 1956 Vyacheslav Mescherin created the Ensemble of electro-musical instruments [ru], which used theremins, electric harps, electric organs, the first synthesizer in the USSR \"Ekvodin\", and also created the first Soviet reverb machine. The style in which Meshcherin's ensemble played is known as \"Space age pop\". In 1957, engineer Igor Simonov assembled a working model of a noise recorder (electroeoliphone), with the help of which it was possible to extract various timbres and consonances of a noise nature. In 1958, Evgeny Murzin designed ANS synthesizer, one of the world's first polyphonic musical synthesizers.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 36, "text": "Founded by Murzin in 1966, the Moscow Experimental Electronic Music Studio became the base for a new generation of experimenters – Eduard Artemyev, Alexander Nemtin [ru], Sándor Kallós, Sofia Gubaidulina, Alfred Schnittke, and Vladimir Martynov. By the end of the 1960s, musical groups playing light electronic music appeared in the USSR. At the state level, this music began to be used to attract foreign tourists to the country and for broadcasting to foreign countries. In the mid-1970s, composer Alexander Zatsepin designed an \"orchestrolla\" – a modification of the mellotron.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 37, "text": "The Baltic Soviet Rebublics also had their own pioneers: in Estonian SSR — Sven Grunberg, in Lithuanian SSR — Gedrus Kupriavicius, in Latvian SSR — Opus and Zodiac.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 38, "text": "The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the Colonel Bogey March, of which no known recordings exist, only the accurate reconstruction. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice. CSIRAC was never recorded, but the music played was accurately reconstructed. The oldest known recordings of computer-generated music were played by the Ferranti Mark 1 computer, a commercial version of the Baby Machine from the University of Manchester in the autumn of 1951. 
The music program was written by Christopher Strachey.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 39, "text": "The earliest group of electronic musical instruments in Japan, Yamaha Magna Organ was built in 1935. however, after World War II, Japanese composers such as Minao Shibata knew of the development of electronic musical instruments. By the late 1940s, Japanese composers began experimenting with electronic music and institutional sponsorship enabled them to experiment with advanced equipment. Their infusion of Asian music into the emerging genre would eventually support Japan's popularity in the development of music technology several decades later.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 40, "text": "Following the foundation of electronics company Sony in 1946, composers Toru Takemitsu and Minao Shibata independently explored possible uses for electronic technology to produce music. Takemitsu had ideas similar to musique concrète, which he was unaware of, while Shibata foresaw the development of synthesizers and predicted a drastic change in music. Sony began producing popular magnetic tape recorders for government and public use.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 41, "text": "The avant-garde collective Jikken Kōbō (Experimental Workshop), founded in 1950, was offered access to emerging audio technology by Sony. The company hired Toru Takemitsu to demonstrate their tape recorders with compositions and performances of electronic tape music. The first electronic tape pieces by the group were \"Toraware no Onna\" (\"Imprisoned Woman\") and \"Piece B\", composed in 1951 by Kuniharu Akiyama. Many of the electroacoustic tape pieces they produced were used as incidental music for radio, film, and theatre. They also held concerts employing a slide show synchronized with a recorded soundtrack. Composers outside of the Jikken Kōbō, such as Yasushi Akutagawa, Saburo Tominaga, and Shirō Fukai, were also experimenting with radiophonic tape music between 1952 and 1953.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 42, "text": "Musique concrète was introduced to Japan by Toshiro Mayuzumi, who was influenced by a Pierre Schaeffer concert. From 1952, he composed tape music pieces for a comedy film, a radio broadcast, and a radio drama. However, Schaeffer's concept of sound object was not influential among Japanese composers, who were mainly interested in overcoming the restrictions of human performance. This led to several Japanese electroacoustic musicians making use of serialism and twelve-tone techniques, evident in Yoshirō Irino's 1951 dodecaphonic piece \"Concerto da Camera\", in the organization of electronic sounds in Mayuzumi's \"X, Y, Z for Musique Concrète\", and later in Shibata's electronic music by 1956.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 43, "text": "Modelling the NWDR studio in Cologne, established an NHK electronic music studio in Tokyo in 1954, which became one of the world's leading electronic music facilities.The NHK electronic music studio was equipped with technologies such as tone-generating and audio processing equipment, recording and radiophonic equipment, ondes Martenot, Monochord and Melochord, sine-wave oscillators, tape recorders, ring modulators, band-pass filters, and four- and eight-channel mixers. Musicians associated with the studio included Toshiro Mayuzumi, Minao Shibata, Joji Yuasa, Toshi Ichiyanagi, and Toru Takemitsu. 
The studio's first electronic compositions were completed in 1955, including Mayuzumi's five-minute pieces \"Studie I: Music for Sine Wave by Proportion of Prime Number\", \"Music for Modulated Wave by Proportion of Prime Number\" and \"Invention for Square Wave and Sawtooth Wave\" produced using the studio's various tone-generating capabilities, and Shibata's 20-minute stereo piece \"Musique Concrète for Stereophonic Broadcast\".", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 44, "text": "The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. \"... Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly.\" Later developments included the work of Max Mathews at Bell Laboratories, who developed the influential MUSIC I program in 1957, one of the first computer programs to play electronic music. Vocoder technology was also a major development in this early era. In 1956, Stockhausen composed Gesang der Jünglinge, the first major work of the Cologne studio, based on a text from the Book of Daniel. An important technological development of that year was the invention of the Clavivox synthesizer by Raymond Scott with subassembly by Robert Moog.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 45, "text": "In 1957, Kid Baltan (Dick Raaymakers) and Tom Dissevelt released their debut album, Song Of The Second Moon, recorded at the Philips studio in the Netherlands. The public remained interested in the new sounds being created around the world, as can be deduced by the inclusion of Varèse's Poème électronique, which was played over four hundred loudspeakers at the Philips Pavilion of the 1958 Brussels World Fair. That same year, Mauricio Kagel, an Argentine composer, composed Transición II. The work was realized at the WDR studio in Cologne. Two musicians performed on the piano, one in the traditional manner, the other playing on the strings, frame, and case. Two other performers used tape to unite the presentation of live sounds with the future of prerecorded materials from later on and its past of recordings made earlier in the performance.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 46, "text": "In 1958, Columbia-Princeton developed the RCA Mark II Sound Synthesizer, the first programmable synthesizer. Prominent composers such as Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Charles Wuorinen, Halim El-Dabh, Bülent Arel and Mario Davidovsky used the RCA Synthesizer extensively in various compositions. One of the most influential composers associated with the early years of the studio was Egypt's Halim El-Dabh who, after having developed the earliest known electronic tape music in 1944, became more famous for Leiyla and the Poet, a 1959 series of electronic compositions that stood out for its immersion and seamless fusion of electronic and folk music, in contrast to the more mathematical approach used by serial composers of the time such as Babbitt. 
El-Dabh's Leiyla and the Poet, released as part of the album Columbia-Princeton Electronic Music Center in 1961, would be cited as a strong influence by a number of musicians, ranging from Neil Rolnick, Charles Amirkhanian and Alice Shields to rock musicians Frank Zappa and The West Coast Pop Art Experimental Band.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 47, "text": "Following the emergence of differences within the GRMC (Groupe de Recherche de Musique Concrète) Pierre Henry, Philippe Arthuys, and several of their colleagues, resigned in April 1958. Schaeffer created a new collective, called Groupe de Recherches Musicales (GRM) and set about recruiting new members including Luc Ferrari, Beatriz Ferreyra, François-Bernard Mâche, Iannis Xenakis, Bernard Parmegiani, and Mireille Chamass-Kyrou. Later arrivals included Ivo Malec, Philippe Carson, Romuald Vandelle, Edgardo Canton and François Bayle.", "title": "Development: 1940s to 1950s" }, { "paragraph_id": 48, "text": "These were fertile years for electronic music—not just for academia, but for independent artists as synthesizer technology became more accessible. By this time, a strong community of composers and musicians working with new sounds and instruments was established and growing. 1960 witnessed the composition of Luening's Gargoyles for violin and tape as well as the premiere of Stockhausen's Kontakte for electronic sounds, piano, and percussion. This piece existed in two versions—one for 4-channel tape, and the other for tape with human performers. \"In Kontakte, Stockhausen abandoned traditional musical form based on linear development and dramatic climax. This new approach, which he termed 'moment form', resembles the 'cinematic splice' techniques in early twentieth-century film.\"", "title": "Expansion: 1960s" }, { "paragraph_id": 49, "text": "The theremin had been in use since the 1920s but it attained a degree of popular recognition through its use in science-fiction film soundtrack music in the 1950s (e.g., Bernard Herrmann's classic score for The Day the Earth Stood Still).", "title": "Expansion: 1960s" }, { "paragraph_id": 50, "text": "In the UK in this period, the BBC Radiophonic Workshop (established in 1958) came to prominence, thanks in large measure to their work on the BBC science-fiction series Doctor Who. One of the most influential British electronic artists in this period was Workshop staffer Delia Derbyshire, who is now famous for her 1963 electronic realisation of the iconic Doctor Who theme, composed by Ron Grainer.", "title": "Expansion: 1960s" }, { "paragraph_id": 51, "text": "During the time of the UNESCO fellowship for studies in electronic music (1958) Josef Tal went on a study tour in the US and Canada. He summarized his conclusions in two articles that he submitted to UNESCO. In 1961, he established the Centre for Electronic Music in Israel at The Hebrew University of Jerusalem. In 1962, Hugh Le Caine arrived in Jerusalem to install his Creative Tape Recorder in the centre. In the 1990s Tal conducted, together with Dr. 
Shlomo Markel, in cooperation with the Technion – Israel Institute of Technology, and the Volkswagen Foundation a research project ('Talmark') aimed at the development of a novel musical notation system for electronic music.", "title": "Expansion: 1960s" }, { "paragraph_id": 52, "text": "Milton Babbitt composed his first electronic work using the synthesizer—his Composition for Synthesizer (1961)—which he created using the RCA synthesizer at the Columbia-Princeton Electronic Music Center.", "title": "Expansion: 1960s" }, { "paragraph_id": 53, "text": "For Babbitt, the RCA synthesizer was a dream come true for three reasons. First, the ability to pinpoint and control every musical element precisely. Second, the time needed to realize his elaborate serial structures were brought within practical reach. Third, the question was no longer \"What are the limits of the human performer?\" but rather \"What are the limits of human hearing?\"", "title": "Expansion: 1960s" }, { "paragraph_id": 54, "text": "The collaborations also occurred across oceans and continents. In 1961, Ussachevsky invited Varèse to the Columbia-Princeton Studio (CPEMC). Upon arrival, Varese embarked upon a revision of Déserts. He was assisted by Mario Davidovsky and Bülent Arel.", "title": "Expansion: 1960s" }, { "paragraph_id": 55, "text": "The intense activity occurring at CPEMC and elsewhere inspired the establishment of the San Francisco Tape Music Center in 1963 by Morton Subotnick, with additional members Pauline Oliveros, Ramon Sender, Anthony Martin, and Terry Riley.", "title": "Expansion: 1960s" }, { "paragraph_id": 56, "text": "Later, the Center moved to Mills College, directed by Pauline Oliveros, and has since been renamed Center for Contemporary Music. Pietro Grossi was an Italian pioneer of computer composition and tape music, who first experimented with electronic techniques in the early sixties. Grossi was a cellist and composer, born in Venice in 1917. He founded the S 2F M (Studio de Fonologia Musicale di Firenze) in 1963 to experiment with electronic sound and composition.", "title": "Expansion: 1960s" }, { "paragraph_id": 57, "text": "Simultaneously in San Francisco, composer Stan Shaff and equipment designer Doug McEachern, presented the first \"Audium\" concert at San Francisco State College (1962), followed by work at the San Francisco Museum of Modern Art (1963), conceived of as in time, controlled movement of sound in space. Twelve speakers surrounded the audience, four speakers were mounted on a rotating, mobile-like construction above. In an SFMOMA performance the following year (1964), San Francisco Chronicle music critic Alfred Frankenstein commented, \"the possibilities of the space-sound continuum have seldom been so extensively explored\". In 1967, the first Audium, a \"sound-space continuum\" opened, holding weekly performances through 1970. In 1975, enabled by seed money from the National Endowment for the Arts, a new Audium opened, designed floor to ceiling for spatial sound composition and performance. \"In contrast, there are composers who manipulated sound space by locating multiple speakers at various locations in a performance space and then switching or panning the sound between the sources. In this approach, the composition of spatial manipulation is dependent on the location of the speakers and usually exploits the acoustical properties of the enclosure. 
Examples include Varese's Poeme Electronique (tape music performed in the Philips Pavilion of the 1958 World Fair, Brussels) and Stan Schaff's Audium installation, currently active in San Francisco.\" Through weekly programs (over 4,500 in 40 years), Shaff \"sculpts\" sound, performing now-digitized spatial works live through 176 speakers.", "title": "Expansion: 1960s" }, { "paragraph_id": 58, "text": "Jean-Jacques Perrey experimented with Schaeffer's techniques on tape loops and was among the first to use the recently released Moog synthesizer developed by Robert Moog. With this instrument he composed some works with Gershon Kingsley and solo. A well-known example of the use of Moog's full-sized Moog modular synthesizer is the 1968 Switched-On Bach album by Wendy Carlos, which triggered a craze for synthesizer music. In 1969 David Tudor brought a Moog modular synthesizer and Ampex tape machines to the National Institute of Design in Ahmedabad with the support of the Sarabhai family, forming the foundation of India's first electronic music studio. Here a group of composers Jinraj Joshipura, Gita Sarabhai, SC Sharma, IS Mathur and Atul Desai developed experimental sound compositions between 1969 and 1973.", "title": "Expansion: 1960s" }, { "paragraph_id": 59, "text": "Musical melodies were first generated by the computer CSIRAC in Australia in 1950. There were newspaper reports from America and England (early and recently) that computers may have played music earlier, but thorough research has debunked these stories as there is no evidence to support the newspaper reports (some of which were obviously speculative). Research has shown that people speculated about computers playing music, possibly because computers would make noises, but there is no evidence that they actually did it.", "title": "Expansion: 1960s" }, { "paragraph_id": 60, "text": "The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard in the 1950s. Mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the \"Colonel Bogey March\" of which no known recordings exist. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice which is current computer-music practice.", "title": "Expansion: 1960s" }, { "paragraph_id": 61, "text": "The first music to be performed in England was a performance of the British National Anthem that was programmed by Christopher Strachey on the Ferranti Mark I, late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, \"Ba, Ba Black Sheep\", and \"In the Mood\" and this is recognised as the earliest recording of a computer to play music. This recording can be heard at this Manchester University site. Researchers at the University of Canterbury, Christchurch declicked and restored this recording in 2016 and the results may be heard on SoundCloud.", "title": "Expansion: 1960s" }, { "paragraph_id": 62, "text": "The late 1950s, 1960s, and 1970s also saw the development of large mainframe computer synthesis. Starting in 1957, Max Mathews of Bell Labs developed the MUSIC programs, culminating in MUSIC V, a direct digital synthesis language. 
Laurie Spiegel developed the algorithmic musical composition software \"Music Mouse\" (1986) for Macintosh, Amiga, and Atari computers.", "title": "Expansion: 1960s" }, { "paragraph_id": 63, "text": "An important new development was the advent of computers to compose music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a composing method that uses mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962), Morsima-Amorsima, ST/10, and Atrées. He developed the computer system UPIC for translating graphical images into musical results and composed Mycènes Alpha (1978) with it.", "title": "Expansion: 1960s" }, { "paragraph_id": 64, "text": "In Europe in 1964, Karlheinz Stockhausen composed Mikrophonie I for tam-tam, hand-held microphones, filters, and potentiometers, and Mixtur for orchestra, four sine-wave generators, and four ring modulators. In 1965 he composed Mikrophonie II for choir, Hammond organ, and ring modulators.", "title": "Expansion: 1960s" }, { "paragraph_id": 65, "text": "In 1966–1967, Reed Ghazala discovered and began to teach \"circuit bending\"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music [sic] concept.", "title": "Expansion: 1960s" }, { "paragraph_id": 66, "text": "Cosey Fanni Tutti's performance art and musical career explored the concept of 'acceptable' music and she went on to explore the use of sound as a means of desire or discomfort.", "title": "Expansion: 1960s" }, { "paragraph_id": 67, "text": "Wendy Carlos performed selections from her album Switched-On Bach on stage with a synthesizer with the St. Louis Symphony Orchestra; another live performance was with Kurzweil Baroque Ensemble for \"Bach at the Beacon\" in 1997. In June 2018, Suzanne Ciani released LIVE Quadraphonic, a live album documenting her first solo performance on a Buchla synthesizer in 40 years. It was one of the first quadraphonic vinyl releases in over 30 years.", "title": "Expansion: 1960s" }, { "paragraph_id": 68, "text": "In the 1950s, Japanese electronic musical instruments began influencing the international music industry. Ikutaro Kakehashi, who founded Ace Tone in 1960, developed his own version of electronic percussion that had been already popular on the overseas electronic organ. At the 1964 NAMM Show, he revealed it as the R-1 Rhythm Ace, a hand-operated percussion device that played electronic drum sounds manually as the user pushed buttons, in a similar fashion to modern electronic drum pads.", "title": "Expansion: 1960s" }, { "paragraph_id": 69, "text": "In 1963, Korg released the Donca-Matic DA-20, an electro-mechanical drum machine. In 1965, Nippon Columbia patented a fully electronic drum machine. Korg released the Donca-Matic DC-11 electronic drum machine in 1966, which they followed with the Korg Mini Pops, which was developed as an option for the Yamaha Electone electric organ. 
Korg's Stageman and Mini Pops series were notable for \"natural metallic percussion\" sounds and incorporating controls for drum \"breaks and fill-ins.\"", "title": "Expansion: 1960s" }, { "paragraph_id": 70, "text": "In 1967, Ace Tone founder Ikutaro Kakehashi patented a preset rhythm-pattern generator using a diode matrix circuit similar to Seeburg's earlier U.S. Patent 3,358,068, filed in 1964 (see Drum machine#History), which he released as the FR-1 Rhythm Ace drum machine the same year. It offered 16 preset patterns, and four buttons to manually play each instrument sound (cymbal, claves, cowbell and bass drum). The rhythm patterns could also be cascaded together by pushing multiple rhythm buttons simultaneously, and the possible combinations of rhythm patterns numbered more than a hundred. Ace Tone's Rhythm Ace drum machines found their way into popular music from the late 1960s, followed by Korg drum machines in the 1970s. Kakehashi later left Ace Tone and founded Roland Corporation in 1972, with Roland synthesizers and drum machines becoming highly influential for the next several decades. The company would go on to have a major impact on popular music, doing more to shape popular electronic music than any other company.", "title": "Expansion: 1960s" }, { "paragraph_id": 71, "text": "Turntablism has origins in the invention of direct-drive turntables. Early belt-drive turntables were unsuitable for turntablism, since they had a slow start-up time, and they were prone to wear-and-tear and breakage, as the belt would break from backspin or scratching. The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka, Japan. It eliminated belts, and instead employed a motor to directly drive a platter on which a vinyl record rests. In 1969, Matsushita released it as the SP-10, the first direct-drive turntable on the market, and the first in their influential Technics series of turntables. It was succeeded by the Technics SL-1100 and SL-1200 in the early 1970s, and they were widely adopted by hip hop musicians, with the SL-1200 remaining the most widely used turntable in DJ culture for several decades.", "title": "Expansion: 1960s" }, { "paragraph_id": 72, "text": "In Jamaica, a form of popular electronic music emerged in the 1960s, dub music, rooted in sound system culture. Dub music was pioneered by studio engineers, such as Sylvan Morris, King Tubby, Errol Thompson, Lee \"Scratch\" Perry, and Scientist, producing reggae-influenced experimental music with electronic sound technology, in recording studios and at sound system parties. Their experiments included forms of tape-based composition comparable to aspects of musique concrète, an emphasis on repetitive rhythmic structures (often stripped of their harmonic elements) comparable to minimalism, the electronic manipulation of spatiality, the sonic electronic manipulation of pre-recorded musical materials from mass media, deejays toasting over pre-recorded music comparable to live electronic music, remixing music, turntablism, and the mixing and scratching of vinyl.", "title": "Expansion: 1960s" }, { "paragraph_id": 73, "text": "Despite the limited electronic equipment available to dub pioneers such as King Tubby and Lee \"Scratch\" Perry, their experiments in remix culture were musically cutting-edge.
King Tubby, for example, was a sound system proprietor and electronics technician, whose small front-room studio in the Waterhouse ghetto of western Kingston was a key site of dub music creation.", "title": "Expansion: 1960s" }, { "paragraph_id": 74, "text": "In the late 1960s, pop and rock musicians, including the Beach Boys and the Beatles, began to use electronic instruments, like the theremin and Mellotron, to supplement and define their sound. The first bands to utilize the Moog synthesizer would be the Doors on Strange Days as well as the Monkees on Pisces, Aquarius, Capricorn & Jones Ltd. In his book Electronic and Experimental Music, Thom Holmes recognises the Beatles' 1966 recording \"Tomorrow Never Knows\" as the song that \"ushered in a new era in the use of electronic music in rock and pop music\" due to the band's incorporation of tape loops and reversed and speed-manipulated tape sounds.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 75, "text": "Also in the late 1960s, the music duos Silver Apples, Beaver and Krause, and experimental rock bands like White Noise, the United States of America, Fifty Foot Hose, and Gong are regarded as pioneers in the electronic rock and electronica genres for their work in melding psychedelic rock with oscillators and synthesizers. The 1969 instrumental \"Popcorn\" written by Gershon Kingsley for Music To Moog By became a worldwide success due to the 1972 version made by Hot Butter.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 76, "text": "The Moog synthesizer was brought to the mainstream in 1968 by Switched-On Bach, a bestselling album of Bach compositions arranged for Moog synthesizer by American composer Wendy Carlos. The album achieved critical and commercial success, winning the 1970 Grammy Awards for Best Classical Album, Best Classical Performance – Instrumental Soloist or Soloists (With or Without Orchestra), and Best Engineered Classical Recording.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 77, "text": "In 1969, David Borden formed the world's first synthesizer ensemble called the Mother Mallard's Portable Masterpiece Company in Ithaca, New York.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 78, "text": "By the end of the 1960s, the Moog synthesizer took a leading place in the sound of emerging progressive rock with bands including Pink Floyd, Yes, Emerson, Lake & Palmer, and Genesis making them part of their sound. Instrumental prog rock was particularly significant in continental Europe, allowing bands like Kraftwerk, Tangerine Dream, Cluster, Can, Neu!, and Faust to circumvent the language barrier. Their synthesiser-heavy \"krautrock\", along with the work of Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 79, "text": "Ambient dub was pioneered by King Tubby and other Jamaican sound artists, using DJ-inspired ambient electronics, complete with drop-outs, echo, equalization and psychedelic electronic effects. It featured layering techniques and incorporated elements of world music, deep basslines and harmonic sounds. Techniques such as a long echo delay were also used. 
Other notable artists within the genre include Dreadzone, Higher Intelligence Agency, The Orb, Ott, Loop Guru, Woob and Transglobal Underground.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 80, "text": "Dub music influenced electronic musical techniques later adopted by hip hop music when Jamaican immigrant DJ Kool Herc in the early 1970s introduced Jamaica's sound system culture and dub music techniques to America. One such technique that became popular in hip hop culture was playing two copies of the same record on two turntables in alternation, extending the b-dancers' favorite section. The turntable eventually went on to become the most visible electronic musical instrument, and occasionally the most virtuosic, in the 1980s and 1990s.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 81, "text": "Electronic rock was also produced by several Japanese musicians, including Isao Tomita's Electric Samurai: Switched on Rock (1972), which featured Moog synthesizer renditions of contemporary pop and rock songs, and Osamu Kitajima's progressive rock album Benzaiten (1974). The mid-1970s saw the rise of electronic art music musicians such as Jean Michel Jarre, Vangelis, Tomita and Klaus Schulze, who were significant influences on the development of new-age music. For some years, the hi-tech appeal of these works created a trend of listing the electronic musical equipment employed on the album sleeves as a distinctive feature. Electronic music began to appear regularly in radio programming and on best-seller charts, with the French band Space's debut studio album Magic Fly and Jarre's Oxygène among the examples. Between 1977 and 1981, Kraftwerk released albums such as Trans-Europe Express, The Man-Machine and Computer World, which influenced subgenres of electronic music.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 82, "text": "In this era, the sound of rock musicians like Mike Oldfield and The Alan Parsons Project (who are credited with the first rock song to feature a digital vocoder, The Raven, in 1975) was often arranged and blended with electronic effects and/or electronic music as well, a practice that became much more prominent in the mid-1980s. Jeff Wayne achieved long-lasting success with his 1978 electronic rock musical version of The War of the Worlds.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 83, "text": "Film scores also benefited from electronic sound. During the 1970s and 1980s, Wendy Carlos composed the scores for A Clockwork Orange, The Shining and Tron. In 1977, Gene Page recorded a disco version of the hit theme by John Williams from Steven Spielberg's film Close Encounters of the Third Kind. Page's version peaked on the R&B chart at #30. The score of the 1978 film Midnight Express, composed by Italian synth pioneer Giorgio Moroder, won the Academy Award for Best Original Score in 1979, as did Vangelis's score for Chariots of Fire in 1981. After the arrival of punk rock, a form of basic electronic rock emerged, increasingly using new digital technology to replace other instruments.
The American duo Suicide, who arose from the punk scene in New York, utilized drum machines and synthesizers in a hybrid between electronics and punk on their eponymous 1977 album.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 84, "text": "Pioneering synth-pop bands which enjoyed success for years included Ultravox with their 1977 track \"Hiroshima Mon Amour\" on Ha!-Ha!-Ha!, Yellow Magic Orchestra with their self-titled album (1978), The Buggles with their prominent 1979 debut single Video Killed the Radio Star, Gary Numan with his solo debut album The Pleasure Principle and single Cars in 1979, Orchestral Manoeuvres in the Dark with their 1979 single Electricity featured on their eponymous debut album, Depeche Mode with their first single Dreaming of Me, recorded in 1980, and the 1981 album Speak & Spell, A Flock of Seagulls with their 1981 single Talking, New Order with Ceremony in 1981, and The Human League with their 1981 hit Don't You Want Me from their third album Dare.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 85, "text": "The definition of MIDI and the development of digital audio made the creation of purely electronic sounds much easier, with audio engineers, producers and composers frequently exploring the possibilities of virtually every new model of electronic sound equipment launched by manufacturers. Synth-pop sometimes used synthesizers to replace all other instruments, but it was more common that bands had one or more keyboardists in their line-ups along with guitarists, bassists, and/or drummers. These developments led to the growth of synth-pop, which, after it was adopted by the New Romantic movement, allowed synthesizers to dominate the pop and rock music of the early 1980s until the style began to fall from popularity in the mid-to-late 1980s. Along with the aforementioned successful pioneers, key acts included Yazoo, Duran Duran, Spandau Ballet, Culture Club, Talk Talk, Japan, and Eurythmics.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 86, "text": "Synth-pop was taken up across the world, with international hits for acts including Men Without Hats, Trans-X and Lime from Canada, Telex from Belgium, Peter Schilling, Sandra, Modern Talking, Propaganda and Alphaville from Germany, Yello from Switzerland and Azul y Negro from Spain. Also, the synth sound is a key feature of Italo-disco.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 87, "text": "Some synth-pop bands, such as the Americans Devo and the Spaniards Aviador Dro, created futuristic visual styles for themselves to reinforce the idea that electronic sounds were linked primarily with technology.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 88, "text": "Keyboard synthesizers became so common that even heavy metal rock bands, a genre often regarded by fans of both sides as the opposite in aesthetics, sound and lifestyle from that of electronic pop artists, achieved worldwide success with songs such as Van Halen's Jump (1983) and Europe's The Final Countdown (1986), which feature synths prominently.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 89, "text": "Elektronmusikstudion (EMS), formerly known as Electroacoustic Music in Sweden, is the Swedish national centre for electronic music and sound art.
The research organisation started in 1964 and is based in Stockholm.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 90, "text": "STEIM is a center for research and development of new musical instruments in the electronic performing arts, located in Amsterdam, Netherlands. STEIM has existed since 1969. It was founded by Misha Mengelberg, Louis Andriessen, Peter Schat, Dick Raaymakers, Jan van Vlijmen, Reinbert de Leeuw, and Konrad Boehmer. This group of Dutch composers had fought for the reformation of Amsterdam's feudal music structures; they insisted on Bruno Maderna's appointment as musical director of the Concertgebouw Orchestra and secured the first public funding for experimental and improvised electronic music in the Netherlands.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 91, "text": "IRCAM in Paris became a major center for computer music research and realization and development of the Sogitec 4X computer system, featuring then revolutionary real-time digital signal processing. Pierre Boulez's Répons (1981) for 24 musicians and 6 soloists used the 4X to transform and route soloists to a loudspeaker system.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 92, "text": "Barry Vercoe describes one of his experiences with early computer sounds:", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 93, "text": "At IRCAM in Paris in 1982, flutist Larry Beauregard had connected his flute to DiGiugno's 4X audio processor, enabling real-time pitch-following. On a Guggenheim at the time, I extended this concept to real-time score-following with automatic synchronized accompaniment, and over the next two years Larry and I gave numerous demonstrations of the computer as a chamber musician, playing Handel flute sonatas, Boulez's Sonatine for flute and piano and by 1984 my own Synapse II for flute and computer—the first piece ever composed expressly for such a setup. A major challenge was finding the right software constructs to support highly sensitive and responsive accompaniment. All of this was pre-MIDI, but the results were impressive even though heavy doses of tempo rubato would continually surprise my Synthetic Performer. In 1985 we solved the tempo rubato problem by incorporating learning from rehearsals (each time you played this way the machine would get better). We were also now tracking violin, since our brilliant, young flautist had contracted a fatal cancer. Moreover, this version used a new standard called MIDI, and here I was ably assisted by former student Miller Puckette, whose initial concepts for this task he later expanded into a program called MAX.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 94, "text": "Released in 1970 by Moog Music, the Mini-Moog was among the first widely available, portable, and relatively affordable synthesizers. It became the most widely used synthesizer of its time in both popular and electronic art music.
Patrick Gleeson, playing live with Herbie Hancock at the beginning of the 1970s, pioneered the use of synthesizers in a touring context, where they were subject to stresses the early machines were not designed for.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 95, "text": "In 1974, the WDR studio in Cologne acquired an EMS Synthi 100 synthesizer, which many composers used to produce notable electronic works—including Rolf Gehlhaar's Fünf deutsche Tänze (1975), Karlheinz Stockhausen's Sirius (1975–1976), and John McGuire's Pulse Music III (1978).", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 96, "text": "Thanks to the miniaturization of electronics in the 1970s, by the start of the 1980s keyboard synthesizers had become lighter and more affordable, integrating into a single slim unit all the necessary audio synthesis electronics and the piano-style keyboard itself, in sharp contrast with the bulky machinery and \"cable spaghetti\" employed throughout the 1960s and 1970s. The trend began with analog synthesizers and continued with digital synthesizers and samplers as well (see below).", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 97, "text": "In 1975, the Japanese company Yamaha licensed the algorithms for frequency modulation synthesis (FM synthesis) from John Chowning, who had experimented with it at Stanford University since 1971. Yamaha's engineers began adapting Chowning's algorithm for use in a digital synthesizer, adding improvements such as the \"key scaling\" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 98, "text": "In 1980, Yamaha eventually released the first FM digital synthesizer, the Yamaha GS-1, but at a high price. In 1983, Yamaha introduced the first stand-alone digital synthesizer, the DX7, which also used FM synthesis and would become one of the best-selling synthesizers of all time. The DX7 was known for its recognizable bright tonality, which was partly due to its high sampling rate of 57 kHz.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 99, "text": "The Korg Poly-800 is a synthesizer released by Korg in 1983. Its initial list price of $795 made it the first fully programmable synthesizer that sold for less than $1000. It had 8-voice polyphony with one digitally controlled oscillator (DCO) per voice.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 100, "text": "The Casio CZ-101 was the first and best-selling phase distortion synthesizer in the Casio CZ line. Released in November 1984, it was one of the first (if not the first) fully programmable polyphonic synthesizers available for under $500.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 101, "text": "The Roland D-50 is a digital synthesizer produced by Roland and released in April 1987. Its features include subtractive synthesis, on-board effects, a joystick for data manipulation, and an analogue synthesis-styled layout design.
The external Roland PG-1000 (1987–1990) programmer could also be attached to the D-50 for more complex manipulation of its sounds.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 102, "text": "A sampler is an electronic or digital musical instrument which uses sound recordings (or \"samples\") of real instrument sounds (e.g., a piano, violin or trumpet), excerpts from recorded songs (e.g., a five-second bass guitar riff from a funk song) or found sounds (e.g., sirens and ocean waves). The samples are loaded or recorded by the user or by a manufacturer. These sounds are then played back using the sampler program itself, a MIDI keyboard, sequencer or another triggering device (e.g., electronic drums) to perform or compose music. Because these samples are usually stored in digital memory, the information can be quickly accessed. A single sample may often be pitch-shifted to different pitches to produce musical scales and chords.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 103, "text": "Before computer memory-based samplers, musicians used tape replay keyboards, which store recordings on analog tape. When a key is pressed the tape head contacts the moving tape and plays a sound. The Mellotron was the most notable model, used by many groups in the late 1960s and the 1970s, but such systems were expensive and heavy due to the multiple tape mechanisms involved, and the range of the instrument was limited to three octaves at the most. To change sounds a new set of tapes had to be installed in the instrument. The emergence of the digital sampler made sampling far more practical.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 104, "text": "The earliest digital sampling was done on the EMS Musys system, developed by Peter Grogono (software), David Cockerell (hardware and interfacing), and Peter Zinovieff (system design and operation) at their London (Putney) Studio c. 1969.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 105, "text": "The first commercially available sampling synthesizer was the Computer Music Melodian by Harry Mendell (1976).", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 106, "text": "First released in 1977–1978, the Synclavier I using FM synthesis, re-licensed from Yamaha, and sold mostly to universities, proved to be highly influential among both electronic music composers and music producers, including Mike Thorne, an early adopter from the commercial world, due to its versatility, its cutting-edge technology, and distinctive sounds.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 107, "text": "The first polyphonic digital sampling synthesizer was the Australian-produced Fairlight CMI, first available in 1979. These early sampling synthesizers used wavetable sample-based synthesis.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 108, "text": "In 1980, a group of musicians and music merchants met to standardize an interface that new instruments could use to communicate control instructions with other instruments and computers. This standard was dubbed Musical Instrument Digital Interface (MIDI) and resulted from a collaboration between leading manufacturers, initially Sequential Circuits, Oberheim, Roland—and later, other participants that included Yamaha, Korg, and Kawai. A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. 
Then, in August 1983, the MIDI Specification 1.0 was finalized.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 109, "text": "MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 110, "text": "MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 111, "text": "Miller Puckette developed graphic signal-processing software for the 4X called Max (after Max Mathews) and later ported it to Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition availability to most composers with a modest computer programming background.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 112, "text": "The early 1980s saw the rise of bass synthesizers, the most influential being the Roland TB-303, a bass synthesizer and sequencer released in late 1981 that later became a fixture in electronic dance music, particularly acid house. One of the first to use it was Charanjit Singh in 1982, though it would not be popularized until Phuture's \"Acid Tracks\" in 1987. Music sequencers began to be used around the mid-20th century, with Tomita's albums in the mid-1970s being later examples. In 1978, Yellow Magic Orchestra were using computer-based technology in conjunction with a synthesiser to produce popular music, making early use of the microprocessor-based Roland MC-8 Microcomposer sequencer.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 113, "text": "Drum machines, also known as rhythm machines, also came into use around the late 1950s, with a later example being Osamu Kitajima's progressive rock album Benzaiten (1974), which used a rhythm machine along with electronic drums and a synthesizer. In 1977, Ultravox's \"Hiroshima Mon Amour\" was one of the first singles to use the metronome-like percussion of a Roland TR-77 drum machine. In 1980, Roland Corporation released the TR-808, one of the first and most popular programmable drum machines. The first band to use it was Yellow Magic Orchestra in 1980, and it would later gain widespread popularity with the release of Marvin Gaye's \"Sexual Healing\" and Afrika Bambaataa's \"Planet Rock\" in 1982.
The TR-808 was a fundamental tool in the later Detroit techno scene of the late 1980s, and was the drum machine of choice for Derrick May and Juan Atkins.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 114, "text": "The characteristic lo-fi sound of chip music was initially the result of early computers' sound chips and sound cards' technical limitations; however, the sound has since become sought after in its own right.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 115, "text": "Common, inexpensive sound chips of the first home computers of the 1980s include the SID of the Commodore 64 and the General Instrument AY series and clones (like the Yamaha YM2149) used in the ZX Spectrum, Amstrad CPC, MSX compatibles and Atari ST models, among others.", "title": "Late 1960s to early 1980s" }, { "paragraph_id": 116, "text": "Synth-pop continued into the late 1980s, with a format that moved closer to dance music, including the work of acts such as British duos Pet Shop Boys, Erasure and The Communards, achieving success throughout much of the 1990s.", "title": "Late 1980s to 1990s" }, { "paragraph_id": 117, "text": "The trend has continued to the present day with modern nightclubs worldwide regularly playing electronic dance music (EDM). Today, electronic dance music has radio stations, websites, and publications like Mixmag dedicated solely to the genre. Despite the industry's attempt to create a specific EDM brand, the initialism remains in use as an umbrella term for multiple genres, including dance-pop, house, techno, electro, and trance, as well as their respective subgenres. Moreover, the genre has found commercial and cultural significance in the United States and North America, thanks to the wildly popular big room house/EDM sound that has been incorporated into U.S. pop music and the rise of large-scale commercial raves such as Electric Daisy Carnival, Tomorrowland and Ultra Music Festival.", "title": "Late 1980s to 1990s" }, { "paragraph_id": 118, "text": "On the other hand, a broad group of electronic-based music styles intended for listening rather than strictly for dancing became known under the \"electronica\" umbrella, which was also a music scene in the early 1990s in the United Kingdom. According to a 1997 Billboard article, \"the union of the club community and independent labels\" provided the experimental and trend-setting environment in which electronica acts developed and eventually reached the mainstream, citing American labels such as Astralwerks (the Chemical Brothers, Fatboy Slim, the Future Sound of London, Fluke), Moonshine (DJ Keoki), Sims, and City of Angels (the Crystal Method) for popularizing the latest version of electronic music.", "title": "Late 1980s to 1990s" }, { "paragraph_id": 119, "text": "The category \"indie electronic\" (or \"indietronica\") has been used to refer to a wave of groups with roots in independent rock who embraced electronic elements (such as synthesizers, samplers, drum machines, and computer programs) and influences such as early electronic composition, krautrock, synth-pop, and dance music. Recordings are commonly made on laptops using digital audio workstations.", "title": "Late 1980s to 1990s" }, { "paragraph_id": 120, "text": "The first wave of indie electronic artists began in the 1990s with acts such as Stereolab (who used vintage gear) and Disco Inferno (who embraced modern sampling technology), and the genre expanded in the 2000s as home recording and software synthesizers came into common use.
Other acts included Broadcast, Lali Puna, Múm, the Postal Service, Skeletons, and School of Seven Bells. Independent labels associated with the style include Warp, Morr Music, Sub Pop, and Ghostly International.", "title": "Late 1980s to 1990s" }, { "paragraph_id": 121, "text": "As computer technology has become more accessible and music software has advanced, interacting with music production technology is now possible using means that bear no relationship to traditional musical performance practices: for instance, laptop performance (laptronica), live coding and Algorave. In general, the term Live PA refers to any live performance of electronic music, whether with laptops, synthesizers, or other devices.", "title": "2000s and 2010s" }, { "paragraph_id": 122, "text": "Beginning around the year 2000, some software-based virtual studio environments emerged, with products such as Propellerhead's Reason and Ableton Live finding popular appeal. Such tools provide viable and cost-effective alternatives to typical hardware-based production studios, and thanks to advances in microprocessor technology, it is now possible to create high-quality music using little more than a single laptop computer. Such advances have democratized music creation, leading to a massive increase in the amount of home-produced electronic music available to the general public via the internet. Software-based instruments and effect units (so-called \"plugins\") can be incorporated in a computer-based studio using the VST platform. Some of these instruments are more or less exact replicas of existing hardware (such as the Roland D-50, ARP Odyssey, Yamaha DX7, or Korg M1).", "title": "2000s and 2010s" }, { "paragraph_id": 123, "text": "Circuit bending is the modification of battery-powered toys and synthesizers to create new unintended sound effects. It was pioneered by Reed Ghazala in the 1960s and Reed coined the name \"circuit bending\" in 1992.", "title": "2000s and 2010s" }, { "paragraph_id": 124, "text": "Following the circuit bending culture, musicians also began to build their own modular synthesizers, causing a renewed interest in the early 1960s designs. Eurorack became a popular system.", "title": "2000s and 2010s" } ]
Electronic music is a genre of music that employs electronic musical instruments, digital instruments, or circuitry-based music technology in its creation. It includes both music made using electronic and electromechanical means. Pure electronic instruments depended entirely on circuitry-based sound generation, for instance using devices such as an electronic oscillator, theremin, or synthesizer. Electromechanical instruments can have mechanical parts such as strings, hammers, and electric elements including magnetic pickups, power amplifiers and loudspeakers. Such electromechanical devices include the telharmonium, Hammond organ, electric piano and the electric guitar. The first electronic musical devices were developed at the end of the 19th century. During the 1920s and 1930s, some electronic instruments were introduced and the first compositions featuring them were written. By the 1940s, magnetic audio tape allowed musicians to tape sounds and then modify them by changing the tape speed or direction, leading to the development of electroacoustic tape music in Egypt and France. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds. Music produced solely from electronic generators was first produced in Germany in 1953 by Karlheinz Stockhausen. Electronic music was also created in Japan and the United States beginning in the 1950s, and algorithmic composition with computers was first demonstrated in the same decade. During the 1960s, digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop, and EDM. In the early 1980s mass-produced digital synthesizers, such as the Yamaha DX7, became popular, and MIDI was developed. In the same decade, with a greater reliance on synthesizers and the adoption of programmable drum machines, electronic popular music came to the fore. During the 1990s, with the proliferation of increasingly affordable music technology, electronic music production became an established part of popular culture. In Berlin starting in 1989, the Love Parade became the largest street party with over 1 million visitors, inspiring other such popular celebrations of electronic music. Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Pop electronic music is most recognizable in its 4/4 form and more connected with the mainstream than preceding forms which were popular in niche markets.
2001-10-26T22:37:02Z
2023-12-29T21:49:50Z
[ "Template:Wikicite", "Template:Div col end", "Template:Library resources box", "Template:Electronic music", "Template:Electronic music top", "Template:Harvnb", "Template:Further", "Template:See also", "Template:Ill", "Template:Cite web", "Template:Citation", "Template:Cite patent", "Template:Hatnote group", "Template:Use dmy dates", "Template:Anchor", "Template:Cbignore", "Template:ISBN", "Template:Nowrap", "Template:Cite journal", "Template:Unordered list", "Template:Main", "Template:Failed verification", "Template:Webarchive", "Template:Other uses", "Template:Reflist", "Template:Multiple image", "Template:Blockquote", "Template:Citation needed", "Template:Full citation needed", "Template:Subscription required", "Template:Short description", "Template:Redirect", "Template:US patent", "Template:Em", "Template:Div col", "Template:Wikiquote", "Template:Electronica", "Template:Sfn", "Template:Importance section", "Template:Portal", "Template:Cite book", "Template:Cite news", "Template:Authority control", "Template:Clarify", "Template:Confusing section", "Template:Disputed", "Template:Discogs release", "Template:Cite magazine", "Template:Commons category-inline", "Template:Infobox music genre", "Template:Sic" ]
https://en.wikipedia.org/wiki/Electronic_music
9,514
Edvard Grieg
Edvard Hagerup Grieg (/ɡriːɡ/ GREEG, Norwegian: [ˈɛ̀dvɑʈ ˈhɑ̀ːɡərʉp ˈɡrɪɡː]; 15 June 1843 – 4 September 1907) was a Norwegian composer and pianist. He is widely considered one of the leading Romantic era composers, and his music is part of the standard classical repertoire worldwide. His use of Norwegian folk music in his own compositions brought the music of Norway to fame, as well as helping to develop a national identity, much as Jean Sibelius did in Finland and Bedřich Smetana in Bohemia. Grieg is the most celebrated person from the city of Bergen, with numerous statues which depict his image, and many cultural entities named after him: the city's largest concert building (Grieg Hall), its most advanced music school (Grieg Academy) and its professional choir (Edvard Grieg Kor). The Edvard Grieg Museum at Grieg's former home Troldhaugen is dedicated to his legacy. Edvard Hagerup Grieg was born in Bergen, Norway (then part of Sweden–Norway). His parents were Alexander Grieg (1806–1875), a merchant and the British Vice-Consul in Bergen; and Gesine Judithe Hagerup (1814–1875), a music teacher and daughter of solicitor and politician Edvard Hagerup. The family name, originally spelled Greig, is associated with the Scottish Clann Ghriogair (Clan Gregor). After the Battle of Culloden in Scotland in 1746, Grieg's great-grandfather, Alexander Greig (1739-1803), travelled widely before settling in Norway about 1770 and establishing business interests in Bergen. Grieg's paternal great-great-grandparents, John (1702-1774) and Anne (1704-1784), are buried in the abandoned churchyard of the ruinous Church of St Ethernan in Rathen, Aberdeenshire, Scotland. Edvard Grieg was raised in a musical family. His mother was his first piano teacher and taught him to play when he was aged six. He studied in several schools, including Tanks Upper Secondary School. During the summer of 1858, Grieg met the eminent Norwegian violinist Ole Bull, who was a family friend; Bull's brother was married to Grieg's aunt. Bull recognized the 15-year-old boy's talent and persuaded his parents to send him to the Leipzig Conservatory, the piano department of which was directed by Ignaz Moscheles. Grieg enrolled in the conservatory, concentrating on piano, and enjoyed the many concerts and recitals given in Leipzig. He disliked the discipline of the conservatory course of study. An exception was the organ, which was mandatory for piano students. About his study in the conservatory, he wrote to his biographer, Aimar Grønvold, in 1881: "I must admit, unlike Svendsen, that I left Leipzig Conservatory just as stupid as I entered it. Naturally, I did learn something there, but my individuality was still a closed book to me." During the spring of 1860, he survived two life-threatening lung diseases, pleurisy and tuberculosis. Throughout his life, Grieg's health was impaired by a destroyed left lung and considerable deformity of his thoracic spine. He suffered from numerous respiratory infections, and ultimately developed combined lung and heart failure. Grieg was admitted many times to spas and sanatoria both in Norway and abroad. Several of his doctors became his friends. During 1861, Grieg made his debut as a concert pianist in Karlshamn, Sweden. In 1862, he finished his studies in Leipzig and had his first concert in his home town, where his program included Beethoven's Pathétique sonata. In 1863, Grieg went to Copenhagen, Denmark, and stayed there for three years. He met the Danish composers J. P. E. Hartmann and Niels Gade. 
He also met his fellow Norwegian composer Rikard Nordraak (composer of the Norwegian national anthem), who became a good friend and source of inspiration. Nordraak died in 1866, and Grieg composed a funeral march in his honor. On 11 June 1867, Grieg married his first cousin, Nina Hagerup (1845–1935), a lyric soprano. The next year, their only child, Alexandra, was born. Alexandra died in 1869 from meningitis. During the summer of 1868, Grieg wrote his Piano Concerto in A minor while on holiday in Denmark. Edmund Neupert gave the concerto its premiere performance on 3 April 1869 in the Casino Theatre in Copenhagen. Grieg himself was unable to be there due to conducting commitments in Christiania (now Oslo). During 1868, Franz Liszt, who had not yet met Grieg, wrote a testimonial for him to the Norwegian Ministry of Education, which resulted in Grieg's obtaining a travel grant. The two men met in Rome in 1870. During Grieg's first visit, they examined Grieg's Violin Sonata No. 1, which pleased Liszt greatly. On his second visit in April, Grieg brought with him the manuscript of his Piano Concerto, which Liszt proceeded to sightread (including the orchestral arrangement). Liszt's rendition greatly impressed his audience, although Grieg said gently to him that he played the first movement too quickly. Liszt also gave Grieg some advice on orchestration (for example, to give the melody of the second theme in the first movement to a solo trumpet, which Grieg himself chose not to accept). In the 1870s he became friends with the poet Bjørnstjerne Bjørnson who shared his interests in Norwegian self-government. Grieg set several of his poems to music, including Landkjenning and Sigurd Jorsalfar. Eventually they decided on an opera based on King Olav Trygvason, but a dispute as to whether music or lyrics should be created first, led to Grieg being diverted to working on incidental music for Henrik Ibsen's play Peer Gynt, which naturally offended Bjørnson. Eventually their friendship was resumed. The incidental music composed for Peer Gynt at the request of the author, contributed to its success, and has separately become some of the composer's most familiar music arranged as orchestral Suites. Grieg had close ties with the Bergen Philharmonic Orchestra (Harmonien), and later became Music Director of the orchestra from 1880 to 1882. In 1888, Grieg met Tchaikovsky in Leipzig. Grieg was impressed by Tchaikovsky. Tchaikovsky thought very highly of Grieg's music, praising its beauty, originality and warmth. On 6 December 1897, Grieg and his wife performed some of his music at a private concert at Windsor Castle for Queen Victoria and her court. Grieg was awarded two honorary doctorates, first by the University of Cambridge in 1894 and the next from the University of Oxford in 1906. The Norwegian government provided Grieg with a pension as he reached retirement age. During the spring of 1903, Grieg made nine 78-rpm gramophone recordings of his piano music in Paris. All of these discs have been reissued on both LPs and CDs, despite limited fidelity. Grieg recorded player piano music rolls for the Hupfeld Phonola piano-player system and Welte-Mignon reproducing system, all of which survive and can be heard today. He also worked with the Aeolian Company for its 'Autograph Metrostyle' piano roll series wherein he indicated the tempo mapping for many of his pieces. In 1899, Grieg cancelled his concerts in France in protest of the Dreyfus affair, an antisemitic scandal that was then roiling French politics. 
Regarding this scandal, Grieg had written that he hoped that the French might, "Soon return to the spirit of 1789, when the French republic declared that it would defend basic human rights." As a result of his statements concerning the affair, he became the target of much French hate mail of that day. During 1906, he met the composer and pianist Percy Grainger in London. Grainger was a great admirer of Grieg's music and a strong empathy was quickly established. In a 1907 interview, Grieg stated: "I have written Norwegian Peasant Dances that no one in my country can play, and here comes this Australian who plays them as they ought to be played! He is a genius that we Scandinavians cannot do other than love." Edvard Grieg died at the Municipal Hospital in Bergen, Norway, on 4 September 1907 at age 64 from heart failure. He had suffered a long period of illness. His last words were "Well, if it must be so." The funeral drew between 30,000 and 40,000 people to the streets of his home town to honor him. Obeying his wish, his own Funeral March in Memory of Rikard Nordraak was played with orchestration by his friend Johan Halvorsen, who had married Grieg's niece. In addition, the Funeral March movement from Chopin's Piano Sonata No. 2 was played. Grieg was cremated in the first Norwegian crematorium opened in Bergen just that year, and his ashes were entombed in a mountain crypt near his house, Troldhaugen. After the death of his wife, her ashes were placed alongside his. Edvard Grieg and his wife were Unitarians and Nina attended the Unitarian church in Copenhagen after his death. A century after his death, Grieg's legacy extends beyond the field of music. There is a large sculpture of Grieg in Seattle, while one of the largest hotels in Bergen (his hometown) is named Quality Hotel Edvard Grieg and a large crater on the planet Mercury is named after Grieg. Some of Grieg's early works include a symphony (which he later suppressed) and a piano sonata. He wrote three violin sonatas and a cello sonata. Grieg composed the incidental music for Henrik Ibsen's play Peer Gynt, which includes the excerpts "In the Hall of the Mountain King" and "Morning Mood". In an 1874 letter to his friend Frants Beyer, Grieg expressed his unhappiness with "Dance of the Mountain King's Daughter", one of the movements in the Peer Gynt incidental music, writing "I have also written something for the scene in the hall of the mountain King – something that I literally can't bear listening to because it absolutely reeks of cow-pies, exaggerated Norwegian nationalism, and trollish self-satisfaction! But I have a hunch that the irony will be discernible." Grieg's Holberg Suite was originally written for the piano, and later arranged by the composer for string orchestra. Grieg wrote songs in which he set lyrics by poets Heinrich Heine, Johann Wolfgang von Goethe, Henrik Ibsen, Hans Christian Andersen, Rudyard Kipling and others. Russian composer Nikolai Myaskovsky used a theme by Grieg for the variations with which he closed his Third String Quartet. Norwegian pianist Eva Knardahl recorded the composer's complete piano music on 13 LPs for BIS Records from 1977 to 1980. The recordings were reissued during 2006 on 12 compact discs, also on BIS Records. Grieg himself recorded many of these piano works before his death in 1907. Pianist Bertha Tapper edited Grieg’s piano works for publication in America by Oliver Ditson. Notes Bibliography
[ { "paragraph_id": 0, "text": "Edvard Hagerup Grieg (/ɡriːɡ/ GREEG, Norwegian: [ˈɛ̀dvɑʈ ˈhɑ̀ːɡərʉp ˈɡrɪɡː]; 15 June 1843 – 4 September 1907) was a Norwegian composer and pianist. He is widely considered one of the leading Romantic era composers, and his music is part of the standard classical repertoire worldwide. His use of Norwegian folk music in his own compositions brought the music of Norway to fame, as well as helping to develop a national identity, much as Jean Sibelius did in Finland and Bedřich Smetana in Bohemia.", "title": "" }, { "paragraph_id": 1, "text": "Grieg is the most celebrated person from the city of Bergen, with numerous statues which depict his image, and many cultural entities named after him: the city's largest concert building (Grieg Hall), its most advanced music school (Grieg Academy) and its professional choir (Edvard Grieg Kor). The Edvard Grieg Museum at Grieg's former home Troldhaugen is dedicated to his legacy.", "title": "" }, { "paragraph_id": 2, "text": "Edvard Hagerup Grieg was born in Bergen, Norway (then part of Sweden–Norway). His parents were Alexander Grieg (1806–1875), a merchant and the British Vice-Consul in Bergen; and Gesine Judithe Hagerup (1814–1875), a music teacher and daughter of solicitor and politician Edvard Hagerup. The family name, originally spelled Greig, is associated with the Scottish Clann Ghriogair (Clan Gregor). After the Battle of Culloden in Scotland in 1746, Grieg's great-grandfather, Alexander Greig (1739-1803), travelled widely before settling in Norway about 1770 and establishing business interests in Bergen. Grieg's paternal great-great-grandparents, John (1702-1774) and Anne (1704-1784), are buried in the abandoned churchyard of the ruinous Church of St Ethernan in Rathen, Aberdeenshire, Scotland.", "title": "Background" }, { "paragraph_id": 3, "text": "Edvard Grieg was raised in a musical family. His mother was his first piano teacher and taught him to play when he was aged six. He studied in several schools, including Tanks Upper Secondary School.", "title": "Background" }, { "paragraph_id": 4, "text": "During the summer of 1858, Grieg met the eminent Norwegian violinist Ole Bull, who was a family friend; Bull's brother was married to Grieg's aunt. Bull recognized the 15-year-old boy's talent and persuaded his parents to send him to the Leipzig Conservatory, the piano department of which was directed by Ignaz Moscheles.", "title": "Background" }, { "paragraph_id": 5, "text": "Grieg enrolled in the conservatory, concentrating on piano, and enjoyed the many concerts and recitals given in Leipzig. He disliked the discipline of the conservatory course of study. An exception was the organ, which was mandatory for piano students. About his study in the conservatory, he wrote to his biographer, Aimar Grønvold, in 1881: \"I must admit, unlike Svendsen, that I left Leipzig Conservatory just as stupid as I entered it. Naturally, I did learn something there, but my individuality was still a closed book to me.\"", "title": "Background" }, { "paragraph_id": 6, "text": "During the spring of 1860, he survived two life-threatening lung diseases, pleurisy and tuberculosis. Throughout his life, Grieg's health was impaired by a destroyed left lung and considerable deformity of his thoracic spine. He suffered from numerous respiratory infections, and ultimately developed combined lung and heart failure. Grieg was admitted many times to spas and sanatoria both in Norway and abroad. 
Several of his doctors became his friends.", "title": "Background" }, { "paragraph_id": 7, "text": "During 1861, Grieg made his debut as a concert pianist in Karlshamn, Sweden. In 1862, he finished his studies in Leipzig and had his first concert in his home town, where his program included Beethoven's Pathétique sonata.", "title": "Career" }, { "paragraph_id": 8, "text": "In 1863, Grieg went to Copenhagen, Denmark, and stayed there for three years. He met the Danish composers J. P. E. Hartmann and Niels Gade. He also met his fellow Norwegian composer Rikard Nordraak (composer of the Norwegian national anthem), who became a good friend and source of inspiration. Nordraak died in 1866, and Grieg composed a funeral march in his honor.", "title": "Career" }, { "paragraph_id": 9, "text": "On 11 June 1867, Grieg married his first cousin, Nina Hagerup (1845–1935), a lyric soprano. The next year, their only child, Alexandra, was born. Alexandra died in 1869 from meningitis. During the summer of 1868, Grieg wrote his Piano Concerto in A minor while on holiday in Denmark. Edmund Neupert gave the concerto its premiere performance on 3 April 1869 in the Casino Theatre in Copenhagen. Grieg himself was unable to be there due to conducting commitments in Christiania (now Oslo).", "title": "Career" }, { "paragraph_id": 10, "text": "During 1868, Franz Liszt, who had not yet met Grieg, wrote a testimonial for him to the Norwegian Ministry of Education, which resulted in Grieg's obtaining a travel grant. The two men met in Rome in 1870. During Grieg's first visit, they examined Grieg's Violin Sonata No. 1, which pleased Liszt greatly. On his second visit in April, Grieg brought with him the manuscript of his Piano Concerto, which Liszt proceeded to sightread (including the orchestral arrangement). Liszt's rendition greatly impressed his audience, although Grieg said gently to him that he played the first movement too quickly. Liszt also gave Grieg some advice on orchestration (for example, to give the melody of the second theme in the first movement to a solo trumpet, which Grieg himself chose not to accept).", "title": "Career" }, { "paragraph_id": 11, "text": "In the 1870s he became friends with the poet Bjørnstjerne Bjørnson who shared his interests in Norwegian self-government. Grieg set several of his poems to music, including Landkjenning and Sigurd Jorsalfar. Eventually they decided on an opera based on King Olav Trygvason, but a dispute as to whether music or lyrics should be created first, led to Grieg being diverted to working on incidental music for Henrik Ibsen's play Peer Gynt, which naturally offended Bjørnson. Eventually their friendship was resumed.", "title": "Career" }, { "paragraph_id": 12, "text": "The incidental music composed for Peer Gynt at the request of the author, contributed to its success, and has separately become some of the composer's most familiar music arranged as orchestral Suites.", "title": "Career" }, { "paragraph_id": 13, "text": "Grieg had close ties with the Bergen Philharmonic Orchestra (Harmonien), and later became Music Director of the orchestra from 1880 to 1882. In 1888, Grieg met Tchaikovsky in Leipzig. Grieg was impressed by Tchaikovsky. 
Tchaikovsky thought very highly of Grieg's music, praising its beauty, originality and warmth.", "title": "Career" }, { "paragraph_id": 14, "text": "On 6 December 1897, Grieg and his wife performed some of his music at a private concert at Windsor Castle for Queen Victoria and her court.", "title": "Career" }, { "paragraph_id": 15, "text": "Grieg was awarded two honorary doctorates, first by the University of Cambridge in 1894 and the next from the University of Oxford in 1906.", "title": "Career" }, { "paragraph_id": 16, "text": "The Norwegian government provided Grieg with a pension as he reached retirement age. During the spring of 1903, Grieg made nine 78-rpm gramophone recordings of his piano music in Paris. All of these discs have been reissued on both LPs and CDs, despite limited fidelity. Grieg recorded player piano music rolls for the Hupfeld Phonola piano-player system and Welte-Mignon reproducing system, all of which survive and can be heard today. He also worked with the Aeolian Company for its 'Autograph Metrostyle' piano roll series wherein he indicated the tempo mapping for many of his pieces.", "title": "Career" }, { "paragraph_id": 17, "text": "In 1899, Grieg cancelled his concerts in France in protest of the Dreyfus affair, an antisemitic scandal that was then roiling French politics. Regarding this scandal, Grieg had written that he hoped that the French might, \"Soon return to the spirit of 1789, when the French republic declared that it would defend basic human rights.\" As a result of his statements concerning the affair, he became the target of much French hate mail of that day.", "title": "Career" }, { "paragraph_id": 18, "text": "During 1906, he met the composer and pianist Percy Grainger in London. Grainger was a great admirer of Grieg's music and a strong empathy was quickly established. In a 1907 interview, Grieg stated: \"I have written Norwegian Peasant Dances that no one in my country can play, and here comes this Australian who plays them as they ought to be played! He is a genius that we Scandinavians cannot do other than love.\"", "title": "Career" }, { "paragraph_id": 19, "text": "Edvard Grieg died at the Municipal Hospital in Bergen, Norway, on 4 September 1907 at age 64 from heart failure. He had suffered a long period of illness. His last words were \"Well, if it must be so.\"", "title": "Career" }, { "paragraph_id": 20, "text": "The funeral drew between 30,000 and 40,000 people to the streets of his home town to honor him. Obeying his wish, his own Funeral March in Memory of Rikard Nordraak was played with orchestration by his friend Johan Halvorsen, who had married Grieg's niece. In addition, the Funeral March movement from Chopin's Piano Sonata No. 2 was played. Grieg was cremated in the first Norwegian crematorium opened in Bergen just that year, and his ashes were entombed in a mountain crypt near his house, Troldhaugen. After the death of his wife, her ashes were placed alongside his.", "title": "Career" }, { "paragraph_id": 21, "text": "Edvard Grieg and his wife were Unitarians and Nina attended the Unitarian church in Copenhagen after his death.", "title": "Career" }, { "paragraph_id": 22, "text": "A century after his death, Grieg's legacy extends beyond the field of music. 
There is a large sculpture of Grieg in Seattle, while one of the largest hotels in Bergen (his hometown) is named Quality Hotel Edvard Grieg and a large crater on the planet Mercury is named after Grieg.", "title": "Career" }, { "paragraph_id": 23, "text": "Some of Grieg's early works include a symphony (which he later suppressed) and a piano sonata. He wrote three violin sonatas and a cello sonata.", "title": "Music" }, { "paragraph_id": 24, "text": "Grieg composed the incidental music for Henrik Ibsen's play Peer Gynt, which includes the excerpts \"In the Hall of the Mountain King\" and \"Morning Mood\". In an 1874 letter to his friend Frants Beyer, Grieg expressed his unhappiness with \"Dance of the Mountain King's Daughter\", one of the movements in the Peer Gynt incidental music, writing \"I have also written something for the scene in the hall of the mountain King – something that I literally can't bear listening to because it absolutely reeks of cow-pies, exaggerated Norwegian nationalism, and trollish self-satisfaction! But I have a hunch that the irony will be discernible.\"", "title": "Music" }, { "paragraph_id": 25, "text": "Grieg's Holberg Suite was originally written for the piano, and later arranged by the composer for string orchestra. Grieg wrote songs in which he set lyrics by poets Heinrich Heine, Johann Wolfgang von Goethe, Henrik Ibsen, Hans Christian Andersen, Rudyard Kipling and others. Russian composer Nikolai Myaskovsky used a theme by Grieg for the variations with which he closed his Third String Quartet. Norwegian pianist Eva Knardahl recorded the composer's complete piano music on 13 LPs for BIS Records from 1977 to 1980. The recordings were reissued during 2006 on 12 compact discs, also on BIS Records. Grieg himself recorded many of these piano works before his death in 1907. Pianist Bertha Tapper edited Grieg’s piano works for publication in America by Oliver Ditson.", "title": "Music" }, { "paragraph_id": 26, "text": "Notes", "title": "References" }, { "paragraph_id": 27, "text": "Bibliography", "title": "References" } ]
Edvard Hagerup Grieg was a Norwegian composer and pianist. He is widely considered one of the leading Romantic era composers, and his music is part of the standard classical repertoire worldwide. His use of Norwegian folk music in his own compositions brought the music of Norway to fame, as well as helping to develop a national identity, much as Jean Sibelius did in Finland and Bedřich Smetana in Bohemia. Grieg is the most celebrated person from the city of Bergen, with numerous statues which depict his image, and many cultural entities named after him: the city's largest concert building, its most advanced music school and its professional choir. The Edvard Grieg Museum at Grieg's former home Troldhaugen is dedicated to his legacy.
2001-08-01T20:27:11Z
2023-12-30T14:51:34Z
[ "Template:Use dmy dates", "Template:Cite book", "Template:Cite journal", "Template:Edvard Grieg", "Template:Redirect", "Template:Infobox person", "Template:IPAc-en", "Template:IMSLP", "Template:Plays audio", "Template:Listen", "Template:Main", "Template:Harvnb", "Template:Full citation needed", "Template:Commons category", "Template:Sfn", "Template:Portal", "Template:Cite web", "Template:Cite news", "Template:Subscription", "Template:OL author", "Template:Romantic music", "Template:Short description", "Template:Musical nationalism", "Template:Authority control", "Template:Reflist", "Template:Cite encyclopedia", "Template:ISBN", "Template:Respell", "Template:IPA-no", "Template:Spaced ndash" ]
https://en.wikipedia.org/wiki/Edvard_Grieg
9,515
Emancipation Proclamation
The Emancipation Proclamation, officially Proclamation 95, was a presidential proclamation and executive order issued by United States President Abraham Lincoln on January 1, 1863, during the American Civil War. The Proclamation had the effect of changing the legal status of more than 3.5 million enslaved African Americans in the secessionist Confederate states from enslaved to free. As soon as slaves escaped the control of their enslavers, either by fleeing to Union lines or through the advance of federal troops, they were permanently free. In addition, the Proclamation allowed for former slaves to "be received into the armed service of the United States". The Emancipation Proclamation was a significant part of the end of slavery in the United States. On September 22, 1862, Lincoln issued the preliminary Emancipation Proclamation. Its third paragraph reads: That on the first day of January, in the year of our Lord, one thousand eight hundred and sixty-three, all persons held as slaves within any State or designated part of a State, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free; and the executive government of the United States, including the military and naval authority thereof, will recognize and maintain the freedom of such persons, and will do no act or acts to repress such persons, or any of them, in any efforts they may make for their actual freedom. On January 1, 1863, Lincoln issued the final Emancipation Proclamation. After quoting from the preliminary Emancipation Proclamation, it stated: I, Abraham Lincoln, President of the United States, by virtue of the power in me vested as Commander-in-Chief, of the Army and Navy of the United States in time of actual armed rebellion against authority and government of the United States, and as a fit and necessary war measure for suppressing said rebellion, do ... order and designate as the States and parts of States wherein the people thereof respectively, are this day in rebellion, against the United States, the following, towit: Lincoln then listed the ten states still in rebellion, excluding parts of states under Union control, and continued: I do order and declare that all persons held as slaves within said designated States, and parts of States, are, and henceforward shall be free. ... [S]uch persons of suitable condition, will be received into the armed service of the United States. ... And upon this act, sincerely believed to be an act of justice, warranted by the Constitution, upon military necessity, I invoke the considerate judgment of mankind, and the gracious favor of Almighty God. The proclamation provided that the executive branch, including the Army and Navy, "will recognize and maintain the freedom of said persons". Even though it excluded states not in rebellion, as well as parts of Louisiana and Virginia under Union control, it still applied to more than 3.5 million of the 4 million enslaved people in the country. Around 25,000 to 75,000 were immediately emancipated in those regions of the Confederacy where the US Army was already in place. It could not be enforced in the areas still in rebellion, but, as the Union army took control of Confederate regions, the Proclamation provided the legal framework for the liberation of more than three and a half million enslaved people in those regions by the end of the war. The Emancipation Proclamation outraged white Southerners and their sympathizers, who saw it as the beginning of a race war. 
It energized abolitionists, and undermined those Europeans who wanted to intervene to help the Confederacy. The Proclamation lifted the spirits of African Americans, both free and enslaved. It encouraged many to escape from slavery and flee toward Union lines, where many joined the Union Army. The Emancipation Proclamation became a historic document because it "would redefine the Civil War, turning it [for the North] from a struggle [solely] to preserve the Union to one [also] focused on ending slavery, and set a decisive course for how the nation would be reshaped after that historic conflict." The Emancipation Proclamation was never challenged in court. To ensure the abolition of slavery in all of the U.S., Lincoln also insisted that Reconstruction plans for Southern states require them to enact laws abolishing slavery (which occurred during the war in Tennessee, Arkansas, and Louisiana); Lincoln encouraged border states to adopt abolition (which occurred during the war in Maryland, Missouri, and West Virginia) and pushed for passage of the 13th Amendment. The Senate passed the 13th Amendment by the necessary two-thirds vote on April 8, 1864; the House of Representatives did so on January 31, 1865; and the required three-fourths of the states ratified it on December 6, 1865. The amendment made slavery and involuntary servitude unconstitutional, "except as a punishment for a crime". The United States Constitution of 1787 did not use the word "slavery" but included several provisions about unfree persons. The Three-Fifths Compromise (in Article I, Section 2) allocated congressional representation based "on the whole Number of free Persons" and "three-fifths of all other Persons". Under the Fugitive Slave Clause (Article IV, Section 2), "No person held to Service or Labour in one State" would become legally free by escaping to another. Article I, Section 9 allowed Congress to pass legislation to outlaw the "Importation of Persons", but not until 1808. However, for purposes of the Fifth Amendment—which states that, "No person shall ... be deprived of life, liberty, or property, without due process of law"—slaves were understood to be property. Although abolitionists used the Fifth Amendment to argue against slavery, it was made part of the legal basis for treating slaves as property by Dred Scott v. Sandford (1857). Slavery was also supported in law and in practice by a pervasive culture of white supremacy. Nonetheless, between 1777 and 1804, every Northern state provided for the immediate or gradual abolition of slavery. No Southern state did so, and the slave population of the South continued to grow, peaking at almost four million people at the beginning of the Civil War, when most slave states sought to break away from the United States. Lincoln understood that the federal government's power to end slavery in peacetime was limited by the Constitution, which, before 1865, committed the issue to individual states. During the Civil War, however, Lincoln issued the Emancipation Proclamation under his authority as "Commander in Chief of the Army and Navy" under Article II, section 2 of the United States Constitution. As such, in the Emancipation Proclamation he claimed to have the authority to free persons held as slaves in those states that were in rebellion "as a fit and necessary war measure for suppressing said rebellion". 
Lincoln also cited the Confiscation Act of 1861 and Confiscation Act of 1862 passed by Congress as sources for his authority in the Preliminary Emancipation Proclamation, but he did not mention these in the Emancipation Proclamation itself. He did not have such authority over the four border slave-holding states that were not in rebellion—Missouri, Kentucky, Maryland and Delaware—so those states were not named in the Proclamation. The fifth border jurisdiction, West Virginia, where slavery remained legal but was in the process of being abolished, was, in January 1863, still part of the legally recognized "reorganized" state of Virginia, based in Alexandria, which was in the Union (as opposed to the Confederate state of Virginia, based in Richmond). The Emancipation Proclamation did not free all slaves in the U.S., contrary to a common misconception; it applied in the ten states that were still in rebellion on January 1, 1863, but it did not cover the nearly 500,000 slaves in the slaveholding border states (Missouri, Kentucky, Maryland, and Delaware) or in parts of Virginia and Louisiana that were no longer in rebellion. Those slaves were freed by later separate state and federal actions. The areas covered were "Arkansas, Texas, Louisiana (except the Parishes of St. Bernard, Plaquemines, Jefferson, St. John, St. Charles, St. James, Ascension, Assumption, Terrebonne, Lafourche, St. Mary, St. Martin, and Orleans, including the city of New Orleans), Mississippi, Alabama, Florida, Georgia, South Carolina, North Carolina, and Virginia (except the forty-eight counties designated as West Virginia, and also the counties of Berkley, Accomac, Northampton, Elizabeth City, York, Princess Ann, and Norfolk, including the cities of Norfolk and Portsmouth)." The state of Tennessee had already mostly returned to Union control, under a recognized Union government, so it was not named and was exempted. Virginia was named, but exemptions were specified for the 48 counties then in the process of forming the new state of West Virginia, and seven additional counties and two cities in the Union-controlled Tidewater region of Virginia. Also specifically exempted were New Orleans and 13 named parishes of Louisiana, which were mostly under federal control at the time of the Emancipation Proclamation. These exemptions left unemancipated an additional 300,000 slaves. The Emancipation Proclamation has been ridiculed, notably by Richard Hofstadter, who wrote that it "had all the moral grandeur of a bill of lading" and "declared free all slaves ... precisely where its effect could not reach". Disagreeing with Hofstadter, William W. Freehling wrote that Lincoln's asserting his power as Commander-in-Chief to issue the proclamation "reads not like an entrepreneur's bill for past services but like a warrior's brandishing of a new weapon". The Emancipation Proclamation resulted in the emancipation of a substantial percentage of the slaves in the Confederate states as the Union armies advanced through the South and slaves escaped to Union lines, or slave owners fled, leaving slaves behind. The Emancipation Proclamation also committed the Union to ending slavery in addition to preserving the Union. Although the Emancipation Proclamation had freed most slaves as a war measure, it had not made slavery illegal. Of the states that were exempted from the Emancipation Proclamation, Maryland, Missouri, Tennessee, and West Virginia prohibited slavery before the war ended. 
In 1863, President Lincoln proposed a moderate plan for the Reconstruction of the captured Confederate State of Louisiana. Only 10 percent of the state's electorate had to take the loyalty oath. The state was also required to accept the Emancipation Proclamation and abolish slavery in its new constitution. By December 1864, the Lincoln plan abolishing slavery had been enacted not only in Louisiana, but also in Arkansas and Tennessee. In Kentucky, Union Army commanders relied on the proclamation's offer of freedom to slaves who enrolled in the Army and provided freedom for an enrollee's entire family; for this and other reasons, the number of slaves in the state fell by more than 70 percent during the war. However, in Delaware and Kentucky, slavery continued to be legal until December 18, 1865, when the Thirteenth Amendment went into effect. The Fugitive Slave Act of 1850 required individuals to return runaway slaves to their owners. During the war, in May 1861, Union general Benjamin Butler declared that slaves who escaped to Union lines were contraband of war, and accordingly he refused to return them. On May 30, after a cabinet meeting called by President Lincoln, "Simon Cameron, the secretary of war, telegraphed Butler to inform him that his contraband policy 'is approved.'" This decision was controversial because it could have been taken to imply recognition of the Confederacy as a separate, independent sovereign state under international law, a notion that Lincoln steadfastly denied. In addition, as contraband, these people were legally designated as "property" when they crossed Union lines and their ultimate status was uncertain. In December 1861, Lincoln sent his first annual message to Congress (the State of the Union Address, but then typically given in writing and not referred to as such). In it he praised the free labor system for respecting human rights over property rights; he endorsed legislation to address the status of contraband slaves and slaves in loyal states, possibly through buying their freedom with federal money; and he endorsed federal funding of voluntary colonization. In January 1862, Thaddeus Stevens, the Republican leader in the House, called for total war against the rebellion to include emancipation of slaves, arguing that emancipation, by forcing the loss of enslaved labor, would ruin the rebel economy. On March 13, 1862, Congress approved an Act Prohibiting the Return of Slaves, which prohibited "All officers or persons in the military or naval service of the United States" from returning fugitive slaves to their owners. Pursuant to a law signed by Lincoln, slavery was abolished in the District of Columbia on April 16, 1862, and owners were compensated. On June 19, 1862, Congress prohibited slavery in all current and future United States territories (though not in the states), and President Lincoln quickly signed the legislation. This act effectively repudiated the 1857 opinion of the Supreme Court of the United States in the Dred Scott case that Congress was powerless to regulate slavery in U.S. territories. It also rejected the notion of popular sovereignty that had been advanced by Stephen A. Douglas as a solution to the slavery controversy, while completing the effort first legislatively proposed by Thomas Jefferson in 1784 to confine slavery within the borders of existing states. On August 6, 1861, the First Confiscation Act freed the slaves who were employed "against the Government and lawful authority of the United States." 
On July 17, 1862, the Second Confiscation Act freed the slaves "within any place occupied by rebel forces and afterwards occupied by forces of the United States." The Second Confiscation Act, unlike the First Confiscation Act, explicitly provided that all slaves covered by it would be permanently freed, stating in section 10 that "all slaves of persons who shall hereafter be engaged in rebellion against the government of the United States, or who shall in any way give aid or comfort thereto, escaping from such persons and taking refuge within the lines of the army; and all slaves captured from such persons or deserted by them and coming under the control of the government of the United States; and all slaves of such person found on [or] being within any place occupied by rebel forces and afterwards occupied by the forces of the United States, shall be deemed captives of war, and shall be forever free of their servitude, and not again held as slaves." However, Lincoln's position continued to be that, although Congress lacked the power to free the slaves in rebel-held states, he, as commander in chief, could do so if he deemed it a proper military measure. By this time, in the summer of 1862, Lincoln had drafted the preliminary Emancipation Proclamation, which he issued on September 22, 1862. It declared that, on January 1, 1863, he would free the slaves in states still in rebellion. Lincoln's preliminary Emancipation Proclamation cited both Confiscations Acts as sources for his authority to issue the Emancipation Proclamation, although neither of these acts would be mentioned in the text of the Emancipation Proclamation itself. Abolitionists had long been urging Lincoln to free all slaves. In the summer of 1862, Republican editor Horace Greeley of the highly influential New-York Tribune wrote a famous editorial entitled "The Prayer of Twenty Millions" demanding a more aggressive attack on the Confederacy and faster emancipation of the slaves: "On the face of this wide earth, Mr. President, there is not one ... intelligent champion of the Union cause who does not feel ... that the rebellion, if crushed tomorrow, would be renewed if slavery were left in full vigor and that every hour of deference to slavery is an hour of added and deepened peril to the Union." Lincoln responded in his open letter to Horace Greeley of August 22, 1862: If there be those who would not save the Union, unless they could at the same time save slavery, I do not agree with them. If there be those who would not save the Union unless they could at the same time destroy slavery, I do not agree with them. My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that. What I do about slavery, and the colored race, I do because I believe it helps to save the Union; and what I forbear, I forbear because I do not believe it would help to save the Union.... I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free. Lincoln scholar Harold Holzer wrote about Lincoln's letter: "Unknown to Greeley, Lincoln composed this after he had already drafted a preliminary Emancipation Proclamation, which he had determined to issue after the next Union military victory. 
Therefore, this letter, was in truth, an attempt to position the impending announcement in terms of saving the Union, not freeing slaves as a humanitarian gesture. It was one of Lincoln's most skillful public relations efforts, even if it has cast longstanding doubt on his sincerity as a liberator." Historian Richard Striner argues that "for years" Lincoln's letter has been misread as "Lincoln only wanted to save the Union." However, within the context of Lincoln's entire career and pronouncements on slavery this interpretation is wrong, according to Striner. Rather, Lincoln was softening the strong Northern white supremacist opposition to his imminent emancipation by tying it to the cause of the Union. This opposition would fight for the Union but not to end slavery, so Lincoln gave them the means and motivation to do both, at the same time. In effect, then, Lincoln may have already chosen the third option he mentioned to Greeley: "freeing some and leaving others alone"; that is, freeing slaves in the states still in rebellion on January 1, 1863, but leaving enslaved those in the border states and Union-occupied areas. Nevertheless, in the Preliminary Emancipation Proclamation itself, Lincoln said that he would recommend to Congress that it compensate states that "adopt, immediate, or gradual abolishment of slavery". In addition, during the hundred days between September 22, 1862, when he issued the Preliminary Emancipation Proclamation, and January 1, 1863, when he issued the Final Emancipation Proclamation, Lincoln took actions that suggest that he continued to consider the first option he mentioned to Greeley — saving the Union without freeing any slave — a possibility. Historian William W. Freehling wrote, "From mid-October to mid-November 1862, he sent personal envoys to Louisiana, Tennessee, and Arkansas". Each of these envoys carried with him a letter from Lincoln stating that if the people of their state desired "to avoid the unsatisfactory" terms of the Final Emancipation Proclamation "and to have peace again upon the old terms" (i.e., with slavery intact), they should rally "the largest number of the people possible" to vote in "elections of members to the Congress of the United States ... friendly to their object". Later, in his Annual Message to Congress of December 1, 1862, Lincoln proposed an amendment to the U.S. Constitution providing that any state that abolished slavery before January 1, 1900, would receive compensation from the United States in the form of interest-bearing U.S. bonds. Adoption of this amendment, in theory, could have ended the war without ever permanently ending slavery, because the amendment provided, "Any State having received bonds ... and afterwards reintroducing or tolerating slavery therein, shall refund to the United States the bonds so received, or the value thereof, and all interest paid thereon". In his 2014 book, Lincoln's Gamble, journalist and historian Todd Brewster asserted that Lincoln's desire to reassert the saving of the Union as his sole war goal was, in fact, crucial to his claim of legal authority for emancipation. Since slavery was protected by the Constitution, the only way that he could free the slaves was as a tactic of war—not as the mission itself. But that carried the risk that when the war ended, so would the justification for freeing the slaves. 
Late in 1862, Lincoln asked his Attorney General, Edward Bates, for an opinion as to whether slaves freed through a war-related proclamation of emancipation could be re-enslaved once the war was over. Bates had to work through the language of the Dred Scott decision to arrive at an answer, but he finally concluded that they could indeed remain free. Still, a complete end to slavery would require a constitutional amendment. Conflicting advice, either to free all slaves or to free none at all, was presented to Lincoln in public and in private. Thomas Nast, a cartoon artist of the Civil War era and the late 1800s who is considered the "Father of the American Cartoon", composed many works, including a two-sided spread that showed the transition from slavery into civilization after President Lincoln signed the Proclamation. Nast believed in equal opportunity and equality for all people, including enslaved Africans and free blacks. A mass rally in Chicago on September 7, 1862, demanded immediate and universal emancipation of slaves. A delegation headed by William W. Patton met the president at the White House on September 13. Lincoln had declared in peacetime that he had no constitutional authority to free the slaves. Even used as a war power, emancipation was a risky political act. Public opinion as a whole was against it. There would be strong opposition among Copperhead Democrats and an uncertain reaction from loyal border states. In Delaware and Maryland, a high proportion of the black population was already free: 91.2% and 49.7%, respectively, in 1860. Lincoln first discussed the proclamation with his cabinet in July 1862. He drafted his "preliminary proclamation" and read it to Secretary of State William Seward and Secretary of the Navy Gideon Welles on July 13. Seward and Welles were at first speechless; then Seward referred to possible anarchy throughout the South and resulting foreign intervention, while Welles apparently said nothing. On July 22, Lincoln presented it to his entire cabinet as something he had determined to do, and he asked their opinion on the wording. Although Secretary of War Edwin Stanton supported it, Seward advised Lincoln to issue the proclamation after a major Union victory, or else it would appear as if the Union was giving "its last shriek of retreat". Walter Stahr, however, writes, "There are contemporary sources, however, that suggest others were involved in the decision to delay", and Stahr quotes them. In September 1862, the Battle of Antietam gave Lincoln the victory he needed to issue the Preliminary Emancipation Proclamation. In the battle, though the Union suffered heavier losses than the Confederates and General McClellan allowed the escape of Robert E. Lee's retreating troops, Union forces turned back a Confederate invasion of Maryland, eliminating more than a quarter of Lee's army in the process. On September 22, 1862, five days after Antietam, and while residing at the Soldiers' Home, Lincoln called his cabinet into session and issued the Preliminary Emancipation Proclamation. According to Civil War historian James M. McPherson, Lincoln told cabinet members, "I made a solemn vow before God, that if General Lee was driven back from Pennsylvania, I would crown the result by the declaration of freedom to the slaves." Lincoln had first shown an early draft of the proclamation to Vice President Hannibal Hamlin, an ardent abolitionist, who was more often kept in the dark on presidential decisions. The final proclamation was issued on January 1, 1863. 
Although implicitly granted authority by Congress, Lincoln used his powers as Commander-in-Chief of the Army and Navy to issue the proclamation "as a necessary war measure." Therefore, it was not the equivalent of a statute enacted by Congress or a constitutional amendment, because Lincoln or a subsequent president could revoke it. One week after issuing the final Proclamation, Lincoln wrote to Major General John McClernand: "After the commencement of hostilities I struggled nearly a year and a half to get along without touching the 'institution'; and when finally I conditionally determined to touch it, I gave a hundred days fair notice of my purpose, to all the States and people, within which time they could have turned it wholly aside, by simply again becoming good citizens of the United States. They chose to disregard it, and I made the peremptory proclamation on what appeared to me to be a military necessity. And being made, it must stand". Lincoln continued, however, that the states included in the proclamation could "adopt systems of apprenticeship for the colored people, conforming substantially to the most approved plans of gradual emancipation; and ... they may be nearly as well off, in this respect, as if the present trouble had not occurred". He concluded by asking McClernand not to "make this letter public". Initially, the Emancipation Proclamation effectively freed only a small percentage of the slaves, namely those who were behind Union lines in areas not exempted. Most slaves were still behind Confederate lines or in exempted Union-occupied areas. Secretary of State William H. Seward commented, "We show our sympathy with slavery by emancipating slaves where we cannot reach them and holding them in bondage where we can set them free." Had any slave state ended its secession attempt before January 1, 1863, it could have kept slavery, at least temporarily. The Proclamation freed the slaves only in areas of the South that were still in rebellion on January 1, 1863. But as the Union army advanced into the South, slaves fled to behind its lines, and "[s]hortly after issuing the Emancipation Proclamation, the Lincoln administration lifted the ban on enticing slaves into Union lines." These events contributed to the destruction of slavery. The Emancipation Proclamation also allowed for the enrollment of freed slaves into the United States military. During the war nearly 200,000 black men, most of them ex-slaves, joined the Union Army. Their contributions were significant in winning the war. The Confederacy did not allow slaves in their army as soldiers until the last month before its defeat. Though the counties of Virginia that were soon to form West Virginia were specifically exempted from the Proclamation (Jefferson County being the only exception), a condition of the state's admittance to the Union was that its constitution provide for the gradual abolition of slavery (an immediate emancipation of all slaves was also adopted there in early 1865). Slaves in the border states of Maryland and Missouri were also emancipated by separate state action before the Civil War ended. In Maryland, a new state constitution abolishing slavery in the state went into effect on November 1, 1864. The Union-occupied counties of eastern Virginia and parishes of Louisiana, which had been exempted from the Proclamation, both adopted state constitutions that abolished slavery in April 1864. In early 1865, Tennessee adopted an amendment to its constitution prohibiting slavery. 
The Proclamation was issued in a preliminary version and a final version. The former, issued on September 22, 1862, was a preliminary announcement outlining the intent of the latter, which took effect 100 days later on January 1, 1863, during the second year of the Civil War. The preliminary Emancipation Proclamation was Abraham Lincoln's declaration that all slaves would be permanently freed in all areas of the Confederacy that were still in rebellion on January 1, 1863. The ten affected states were individually named in the final Emancipation Proclamation (South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, North Carolina). Not included were the Union slave states of Maryland, Delaware, Missouri and Kentucky. Also not named was the state of Tennessee, in which a Union-controlled military government had already been set up, based in the capital, Nashville. Specific exemptions were stated for areas also under Union control on January 1, 1863, namely 48 counties that would soon become West Virginia, seven other named counties of Virginia including Berkeley and Hampshire counties, which were soon added to West Virginia, New Orleans and 13 named parishes nearby. Union-occupied areas of the Confederate states where the proclamation was put into immediate effect by local commanders included Winchester, Virginia, Corinth, Mississippi, the Sea Islands along the coasts of the Carolinas and Georgia, Key West, Florida, and Port Royal, South Carolina. On New Year's Eve in 1862, African Americans – enslaved and free – gathered across the United States to hold Watch Night ceremonies for "Freedom's Eve", looking toward the stroke of midnight and the promised fulfillment of the Proclamation. It has been inaccurately claimed that the Emancipation Proclamation did not free a single slave; historian Lerone Bennett Jr. alleged that the proclamation was a hoax deliberately designed not to free any slaves. However, as a result of the Proclamation, most slaves became free during the course of the war, beginning on the day it took effect; eyewitness accounts at places such as Hilton Head Island, South Carolina, and Port Royal, South Carolina record celebrations on January 1 as thousands of blacks were informed of their new legal status of freedom. "Estimates of the number of slaves freed immediately by the Emancipation Proclamation are uncertain. One contemporary estimate put the 'contraband' population of Union-occupied North Carolina at 10,000, and the Sea Islands of South Carolina also had a substantial population. Those 20,000 slaves were freed immediately by the Emancipation Proclamation." This Union-occupied zone where freedom began at once included parts of eastern North Carolina, the Mississippi Valley, northern Alabama, the Shenandoah Valley of Virginia, a large part of Arkansas, and the Sea Islands of Georgia and South Carolina. Although some counties of Union-occupied Virginia were exempted from the Proclamation, the lower Shenandoah Valley and the area around Alexandria were covered. Emancipation was immediately enforced as Union soldiers advanced into the Confederacy. Slaves fled their masters and were often assisted by Union soldiers. On the other hand, Robert Gould Shaw wrote to his mother on September 25, 1862, "So the 'Proclamation of Emancipation' has come at last, or rather, its forerunner.... I suppose you all are very much excited about it. For my part, I can't see what practical good it can do now. 
Wherever our army has been, there remain no slaves, and the Proclamation will not free them where we don't go." Ten days later, he wrote her again, "Don't imagine, from what I said in my last ... that I thought Mr. Lincoln's 'Emancipation Proclamation' not right ... but still, as a war-measure, I don't see the immediate benefit of it, ... as the slaves are sure of being free at any rate, with or without an Emancipation Act." Booker T. Washington, as a boy of 9 in Virginia, remembered the day in early 1865: As the great day drew nearer, there was more singing in the slave quarters than usual. It was bolder, had more ring, and lasted later into the night. Most of the verses of the plantation songs had some reference to freedom.... [S]ome man who seemed to be a stranger (a United States officer, I presume) made a little speech and then read a rather long paper—the Emancipation Proclamation, I think. After the reading we were told that we were all free, and could go when and where we pleased. My mother, who was standing by my side, leaned over and kissed her children, while tears of joy ran down her cheeks. She explained to us what it all meant, that this was the day for which she had been so long praying, but fearing that she would never live to see. Runaway slaves who had escaped to Union lines had previously been held by the Union Army as "contraband of war" under the Confiscation Acts; when the proclamation took effect, they were told at midnight that they were free to leave. The Sea Islands off the coast of Georgia had been occupied by the Union Navy earlier in the war. The whites had fled to the mainland while the blacks stayed. An early program of Reconstruction was set up for the former slaves, including schools and training. Naval officers read the proclamation and told them they were free. Slaves had been part of the "engine of war" for the Confederacy. They produced and prepared food; sewed uniforms; repaired railways; worked on farms and in factories, shipping yards, and mines; built fortifications; and served as hospital workers and common laborers. News of the Proclamation spread rapidly by word of mouth, arousing hopes of freedom, creating general confusion, and encouraging thousands to escape to Union lines. George Washington Albright, a teenage slave in Mississippi, recalled that like many of his fellow slaves, his father escaped to join Union forces. According to Albright, plantation owners tried to keep the Proclamation from slaves but news of it came through the "grapevine". The young slave became a "runner" for an informal group they called the 4Ls ("Lincoln's Legal Loyal League") bringing news of the proclamation to secret slave meetings at plantations throughout the region. Robert E. Lee saw the Emancipation Proclamation as a way for the Union to bolster the number of soldiers it could place on the field, making it imperative for the Confederacy to increase their own numbers. Writing on the matter after the sack of Fredericksburg, Lee wrote, "In view of the vast increase of the forces of the enemy, of the savage and brutal policy he has proclaimed, which leaves us no alternative but success or degradation worse than death, if we would save the honor of our families from pollution [and] our social system from destruction, let every effort be made, every means be employed, to fill and maintain the ranks of our armies, until God in his mercy shall bless us with the establishment of our independence." 
The Proclamation was immediately denounced by Copperhead Democrats, who opposed the war and advocated restoring the union by allowing slavery. Horatio Seymour, while running for governor of New York, cast the Emancipation Proclamation as a call for slaves to commit extreme acts of violence on all white southerners, saying it was "a proposal for the butchery of women and children, for scenes of lust and rapine, and of arson and murder, which would invoke the interference of civilized Europe". The Copperheads also saw the Proclamation as an unconstitutional abuse of presidential power. Editor Henry A. Reeves wrote in Greenport's Republican Watchman that "In the name of freedom of Negroes, [the proclamation] imperils the liberty of white men; to test a utopian theory of equality of races which Nature, History and Experience alike condemn as monstrous, it overturns the Constitution and Civil Laws and sets up Military Usurpation in their stead." Racism remained pervasive on both sides of the conflict and many in the North supported the war only as an effort to force the South to stay in the Union. The promises of many Republican politicians that the war was to restore the Union and not about black rights or ending slavery were declared lies by their opponents, who cited the Proclamation. Copperhead David Allen spoke to a rally in Columbiana, Ohio, stating, "I have told you that this war is carried on for the Negro. There is the proclamation of the President of the United States. Now fellow Democrats I ask you if you are going to be forced into a war against your Britheren of the Southern States for the Negro. I answer No!" The Copperheads saw the Proclamation as irrefutable proof of their position and the beginning of a political rise for their members; in Connecticut, H. B. Whiting wrote that the truth was now plain even to "those stupid thickheaded persons who persisted in thinking that the President was a conservative man and that the war was for the restoration of the Union under the Constitution." War Democrats, who rejected the Copperhead position within their party, found themselves in a quandary. While throughout the war they had continued to espouse the racist positions of their party and their disdain of the concerns of slaves, they did see the Proclamation as a viable military tool against the South and worried that opposing it might demoralize troops in the Union army. The question would continue to trouble them and eventually lead to a split within their party as the war progressed. Lincoln further alienated many in the Union two days after issuing the Preliminary Emancipation Proclamation by suspending habeas corpus. His opponents linked these two actions in their claims that he was becoming a despot. In light of this and a lack of military success for the Union armies, many War Democrat voters who had previously supported Lincoln turned against him and joined the Copperheads in the off-year elections held in October and November. In the 1862 elections, the Democrats gained 28 seats in the House as well as the governorship of New York. Lincoln's friend Orville Hickman Browning told the president that the Proclamation and the suspension of habeas corpus had been "disastrous" for his party by handing the Democrats so many weapons. Lincoln made no response. Copperhead William Javis of Connecticut pronounced the election the "beginning of the end of the utter downfall of Abolitionism in the United States". Historians James M. 
McPherson and Allan Nevins state that though the results looked very troubling, they could be seen favorably by Lincoln; his opponents did well only in their historic strongholds and "at the national level their gains in the House were the smallest of any minority party's in an off-year election in nearly a generation. Michigan, California, and Iowa all went Republican... Moreover, the Republicans picked up five seats in the Senate." McPherson states, "If the election was in any sense a referendum on emancipation and on Lincoln's conduct of the war, a majority of Northern voters endorsed these policies." The initial Confederate response was outrage. The Proclamation was seen as vindication of the rebellion and proof that Lincoln would have abolished slavery even if the states had remained in the Union. In an August 1863 letter to President Lincoln, U.S. Army general Ulysses S. Grant observed that the proclamation's "arming the negro", together with "the emancipation of the negro, is the heavyest [sic] blow yet given the Confederacy. The South rave a greatdeel [sic] about it and profess to be very angry." In May 1863, a few months after the Proclamation took effect, the Confederacy passed a law demanding "full and ample retaliation" against the U.S. for such measures. The Confederacy stated that black U.S. soldiers captured while fighting against the Confederacy would be tried as slave insurrectionists in civil courts—a capital offense with an automatic sentence of death. Less than a year after the law's passage, the Confederates massacred black U.S. soldiers at Fort Pillow. Confederate President Jefferson Davis reacted to the Emancipation Proclamation with outrage and in an address to the Confederate Congress on January 12 threatened to send any U.S. military officer captured in Confederate territory covered by the proclamation to state authorities to be charged with "exciting servile insurrection", which was a capital offense. Confederate General Robert E. Lee called the Proclamation a "savage and brutal policy he has proclaimed, which leaves us no alternative but success or degradation worse than death." However, some Confederates welcomed the Proclamation, because they believed it would strengthen pro-slavery sentiment in the Confederacy and thus lead to greater enlistment of white men into the Confederate army. According to one Confederate cavalry sergeant from Kentucky, "The Proclamation is worth three hundred thousand soldiers to our Government at least.... It shows exactly what this war was brought about for and the intention of its damnable authors." Even some Union soldiers concurred with this view and expressed reservations about the Proclamation, not on principle, but rather because they were afraid it would increase the Confederacy's determination to fight on and maintain slavery. One Union soldier from New York stated worryingly after the Proclamation's issuance, "I know enough of the southern spirit that I think they will fight for the institution of slavery even to extermination." As a result of the Proclamation, the price of slaves in the Confederacy increased in the months after its issuance, with one Confederate from South Carolina opining in 1865 that "now is the time for Uncle to buy some negro women and children...." 
As Lincoln had hoped, the proclamation turned foreign popular opinion in favor of the Union by gaining the support of anti-slavery countries and countries that had already abolished slavery (especially the developed countries in Europe such as the United Kingdom and France). This shift ended the Confederacy's hopes of gaining official recognition. Since the Emancipation Proclamation made the eradication of slavery an explicit Union war goal, it linked support for the South to support for slavery. Public opinion in Britain would not tolerate support for slavery. As Henry Adams noted, "The Emancipation Proclamation has done more for us than all our former victories and all our diplomacy." In Italy, Giuseppe Garibaldi hailed Lincoln as "the heir of the aspirations of John Brown". On August 6, 1863, Garibaldi wrote to Lincoln: "Posterity will call you the great emancipator, a more enviable title than any crown could be, and greater than any merely mundane treasure". Mayor Abel Haywood, a representative for workers from Manchester, England, wrote to Lincoln saying, "We joyfully honor you for many decisive steps toward practically exemplifying your belief in the words of your great founders: 'All men are created free and equal.'" The Emancipation Proclamation served to ease tensions with Europe over the North's conduct of the war, and combined with the recent failed Southern offensive at Antietam, to remove any practical chance for the Confederacy to receive foreign military intervention in the war. However, in spite of the Emancipation Proclamation, arms sales to the Confederacy through blockade running, from British firms and dealers, continued, with knowledge of the British government. The Confederacy was able to sustain the fight for two more years largely thanks to the weapons supplied by British blockade runners. As a result, the blockade runners operating from Britain were responsible for killing 400,000 additional soldiers and civilians on both sides. Lincoln's Gettysburg Address on November 19, 1863 made indirect reference to the Proclamation and the ending of slavery as a war goal with the phrase "new birth of freedom". The Proclamation solidified Lincoln's support among the rapidly growing abolitionist elements of the Republican Party and ensured that they would not block his renomination in 1864. In December 1863, Lincoln issued his Proclamation of Amnesty and Reconstruction, which dealt with the ways the rebel states could reconcile with the Union. Key provisions required that the states accept the Emancipation Proclamation and thus the freedom of their slaves, and accept the Confiscation Acts, as well as the Act banning slavery in United States territories. Near the end of the war, abolitionists were concerned that the Emancipation Proclamation would be construed solely as a war measure, as Lincoln intended, and would no longer apply once fighting ended. They also were increasingly anxious to secure the freedom of all slaves, not just those freed by the Emancipation Proclamation. Thus pressed, Lincoln staked a large part of his 1864 presidential campaign on a constitutional amendment to abolish slavery throughout the United States. Lincoln's campaign was bolstered by votes in both Maryland and Missouri to abolish slavery in those states. Maryland's new constitution abolishing slavery took effect on November 1, 1864. 
Slavery in Missouri ended on January 11, 1865, when a state convention approved an ordinance abolishing slavery by a vote of 60–4, and later the same day, Governor Thomas C. Fletcher followed up with his own "Proclamation of Freedom."

Winning re-election, Lincoln pressed the lame-duck 38th Congress to pass the proposed amendment immediately rather than wait for the incoming 39th Congress to convene. In January 1865, Congress sent to the state legislatures for ratification what became the Thirteenth Amendment, banning slavery in all U.S. states and territories, except as punishment for a crime. The amendment was ratified by the legislatures of enough states by December 6, 1865, and proclaimed 12 days later. Approximately 40,000 slaves in Kentucky and 1,000 in Delaware were liberated at that time.

Lincoln's proclamation has been called "one of the most radical emancipations in the history of the modern world." Nonetheless, as American society continued over the years to treat black people deeply unfairly, cynicism towards Lincoln and the Emancipation Proclamation increased. One attack was Lerone Bennett's Forced into Glory: Abraham Lincoln's White Dream (2000), which claimed that Lincoln was a white supremacist who issued the Emancipation Proclamation in lieu of the real racial reforms for which radical abolitionists pushed. In response, one scholarly review states that "Few Civil War scholars take Bennett and DiLorenzo seriously, pointing to their narrow political agenda and faulty research." In his Lincoln's Emancipation Proclamation, Allen C. Guelzo noted professional historians' lack of substantial respect for the document, since it has been the subject of few major scholarly studies. He argued that Lincoln was the U.S.'s "last Enlightenment politician" and as such had "allegiance to 'reason, cold, calculating, unimpassioned reason'.... But the most important among the Enlightenment's political virtues for Lincoln, and for his Proclamation, was prudence".

Other historians have given more credit to Lincoln for what he accomplished toward ending slavery and for his own growth in political and moral stature. More might have been accomplished if he had not been assassinated. As Eric Foner wrote:

Lincoln was not an abolitionist or Radical Republican, a point Bennett reiterates innumerable times. He did not favor immediate abolition before the war, and held racist views typical of his time. But he was also a man of deep convictions when it came to slavery, and during the Civil War displayed a remarkable capacity for moral and political growth.

Kal Ashraf wrote:

Perhaps in rejecting the critical dualism—Lincoln as individual emancipator pitted against collective self-emancipators—there is an opportunity to recognise the greater persuasiveness of the combination. In a sense, yes: a racist, flawed Lincoln did something heroic, and not in lieu of collective participation, but next to, and enabled, by it. To venerate a singular 'Great Emancipator' may be as reductive as dismissing the significance of Lincoln's actions. Who he was as a man, no one of us can ever really know. So it is that the version of Lincoln we keep is also the version we make.

Dr. Martin Luther King Jr. made many references to the Emancipation Proclamation during the civil rights movement.
These include an "Emancipation Proclamation Centennial Address" he gave in New York City on September 12, 1962, in which he placed the Proclamation alongside the Declaration of Independence as an "imperishable" contribution to civilization and added, "All tyrants, past, present and future, are powerless to bury the truths in these declarations...." He lamented that despite a history in which the United States "proudly professed the basic principles inherent in both documents," it "sadly practiced the antithesis of these principles." He concluded, "There is but one way to commemorate the Emancipation Proclamation. That is to make its declarations of freedom real; to reach back to the origins of our nation when our message of equality electrified an unfree world, and reaffirm democracy by deeds as bold and daring as the issuance of the Emancipation Proclamation."

King's most famous invocation of the Emancipation Proclamation was in a speech from the steps of the Lincoln Memorial at the 1963 March on Washington for Jobs and Freedom (often referred to as the "I Have a Dream" speech). King began the speech saying, "Five score years ago, a great American, in whose symbolic shadow we stand, signed the Emancipation Proclamation. This momentous decree came as a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice. It came as a joyous daybreak to end the long night of their captivity. But one hundred years later, we must face the tragic fact that the Negro still is not free. One hundred years later, the life of the Negro is still sadly crippled by the manacles of segregation and the chains of discrimination."

In the early 1960s, Dr. Martin Luther King Jr. and his associates called on President John F. Kennedy to bypass Southern segregationist opposition in the Congress by issuing an executive order to put an end to segregation. This envisioned document was referred to as the "Second Emancipation Proclamation". Kennedy, however, did not issue a second Emancipation Proclamation "and noticeably avoided all centennial celebrations of emancipation."

Historian David W. Blight points out that, although the idea of an executive order to act as a second Emancipation Proclamation "has been virtually forgotten," the manifesto that King and his associates produced calling for an executive order showed his "close reading of American politics" and recalled how moral leadership could have an effect on the American public through an executive order. Despite its failure "to spur a second Emancipation Proclamation from the White House, it was an important and emphatic attempt to combat the structured forgetting of emancipation latent within Civil War memory."

On June 11, 1963, President Kennedy spoke on national television about civil rights. Kennedy, who had been routinely criticized as timid by some civil rights activists, reminded Americans that two black students had been peacefully enrolled in the University of Alabama with the aid of the National Guard, despite the opposition of Governor George Wallace. Kennedy called it a "moral issue." Invoking the centennial of the Emancipation Proclamation, he said,

One hundred years of delay have passed since President Lincoln freed the slaves, yet their heirs, their grandsons, are not fully free. They are not yet freed from the bonds of injustice. They are not yet freed from social and economic oppression.
And this Nation, for all its hopes and all its boasts, will not be fully free until all its citizens are free. We preach freedom around the world, and we mean it, and we cherish our freedom here at home, but are we to say to the world, and much more importantly, to each other that this is a land of the free except for the Negroes; that we have no second-class citizens except Negroes; that we have no class or caste system, no ghettoes, no master race except with respect to Negroes? Now the time has come for this Nation to fulfill its promise. The events in Birmingham and elsewhere have so increased the cries for equality that no city or State or legislative body can prudently choose to ignore them.

In the same speech, Kennedy announced he would introduce a comprehensive civil rights bill in the United States Congress, which he did a week later. Kennedy pushed for its passage until he was assassinated on November 22, 1963. Historian Peniel E. Joseph argues that Lyndon Johnson's ability to get that bill, the Civil Rights Act of 1964, signed into law on July 2, 1964, was aided by "the moral forcefulness of the June 11 speech", which had turned "the narrative of civil rights from a regional issue into a national story promoting racial equality and democratic renewal."

During the civil rights movement of the 1960s, Lyndon B. Johnson invoked the Emancipation Proclamation, holding it up as a promise yet to be fully implemented. As vice president, while speaking from Gettysburg on May 30, 1963 (Memorial Day), during the centennial year of the Emancipation Proclamation, Johnson connected it directly with the ongoing civil rights struggles of the time, saying, "One hundred years ago, the slave was freed. One hundred years later, the Negro remains in bondage to the color of his skin.... In this hour, it is not our respective races which are at stake—it is our nation. Let those who care for their country come forward, North and South, white and Negro, to lead the way through this moment of challenge and decision.... Until justice is blind to color, until education is unaware of race, until opportunity is unconcerned with color of men's skins, emancipation will be a proclamation but not a fact. To the extent that the proclamation of emancipation is not fulfilled in fact, to that extent we shall have fallen short of assuring freedom to the free."

As president, Johnson again invoked the proclamation in a speech presenting the Voting Rights Act at a joint session of Congress on Monday, March 15, 1965. This was one week after violence had been inflicted on peaceful civil rights marchers during the Selma to Montgomery marches. Johnson said, "it's not just Negroes, but really it's all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome. As a man whose roots go deeply into Southern soil, I know how agonizing racial feelings are. I know how difficult it is to reshape the attitudes and the structure of our society. But a century has passed—more than 100 years—since the Negro was freed. And he is not fully free tonight. It was more than 100 years ago that Abraham Lincoln—a great President of another party—signed the Emancipation Proclamation. But emancipation is a proclamation and not a fact. A century has passed—more than 100 years—since equality was promised, and yet the Negro is not equal. A century has passed since the day of promise, and the promise is unkept. The time of justice has now come, and I tell you that I believe sincerely that no force can hold it back.
It is right in the eyes of man and God that it should come, and when it does, I think that day will brighten the lives of every American."

In the 1963 episode of The Andy Griffith Show, "Andy Discovers America", Andy asks Barney to explain the Emancipation Proclamation to Opie, who is struggling with history at school. Barney brags about his history expertise, yet it is apparent he cannot answer Andy's question. He finally becomes frustrated and explains that it is a proclamation for certain people who wanted emancipation. The Emancipation Proclamation was also a main topic of discussion in the film Lincoln (2012), directed by Steven Spielberg.

The Emancipation Proclamation is celebrated around the world, including on stamps of nations such as the Republic of Togo. The United States commemorative stamp was issued on August 16, 1963, the opening day of the Century of Negro Progress Exposition in Chicago, Illinois. The stamp was designed by Georg Olden, and an initial printing of 120 million was authorized.
[ { "paragraph_id": 0, "text": "The Emancipation Proclamation, officially Proclamation 95, was a presidential proclamation and executive order issued by United States President Abraham Lincoln on January 1, 1863, during the American Civil War. The Proclamation had the effect of changing the legal status of more than 3.5 million enslaved African Americans in the secessionist Confederate states from enslaved to free. As soon as slaves escaped the control of their enslavers, either by fleeing to Union lines or through the advance of federal troops, they were permanently free. In addition, the Proclamation allowed for former slaves to \"be received into the armed service of the United States\". The Emancipation Proclamation was a significant part of the end of slavery in the United States.", "title": "" }, { "paragraph_id": 1, "text": "On September 22, 1862, Lincoln issued the preliminary Emancipation Proclamation. Its third paragraph reads:", "title": "" }, { "paragraph_id": 2, "text": "That on the first day of January, in the year of our Lord, one thousand eight hundred and sixty-three, all persons held as slaves within any State or designated part of a State, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free; and the executive government of the United States, including the military and naval authority thereof, will recognize and maintain the freedom of such persons, and will do no act or acts to repress such persons, or any of them, in any efforts they may make for their actual freedom.", "title": "" }, { "paragraph_id": 3, "text": "On January 1, 1863, Lincoln issued the final Emancipation Proclamation. After quoting from the preliminary Emancipation Proclamation, it stated:", "title": "" }, { "paragraph_id": 4, "text": "I, Abraham Lincoln, President of the United States, by virtue of the power in me vested as Commander-in-Chief, of the Army and Navy of the United States in time of actual armed rebellion against authority and government of the United States, and as a fit and necessary war measure for suppressing said rebellion, do ... order and designate as the States and parts of States wherein the people thereof respectively, are this day in rebellion, against the United States, the following, towit:", "title": "" }, { "paragraph_id": 5, "text": "Lincoln then listed the ten states still in rebellion, excluding parts of states under Union control, and continued:", "title": "" }, { "paragraph_id": 6, "text": "I do order and declare that all persons held as slaves within said designated States, and parts of States, are, and henceforward shall be free. ... [S]uch persons of suitable condition, will be received into the armed service of the United States. ... And upon this act, sincerely believed to be an act of justice, warranted by the Constitution, upon military necessity, I invoke the considerate judgment of mankind, and the gracious favor of Almighty God.", "title": "" }, { "paragraph_id": 7, "text": "The proclamation provided that the executive branch, including the Army and Navy, \"will recognize and maintain the freedom of said persons\". Even though it excluded states not in rebellion, as well as parts of Louisiana and Virginia under Union control, it still applied to more than 3.5 million of the 4 million enslaved people in the country. Around 25,000 to 75,000 were immediately emancipated in those regions of the Confederacy where the US Army was already in place. 
It could not be enforced in the areas still in rebellion, but, as the Union army took control of Confederate regions, the Proclamation provided the legal framework for the liberation of more than three and a half million enslaved people in those regions by the end of the war. The Emancipation Proclamation outraged white Southerners and their sympathizers, who saw it as the beginning of a race war. It energized abolitionists, and undermined those Europeans who wanted to intervene to help the Confederacy. The Proclamation lifted the spirits of African Americans, both free and enslaved. It encouraged many to escape from slavery and flee toward Union lines, where many joined the Union Army. The Emancipation Proclamation became a historic document because it \"would redefine the Civil War, turning it [for the North] from a struggle [solely] to preserve the Union to one [also] focused on ending slavery, and set a decisive course for how the nation would be reshaped after that historic conflict.\"", "title": "" }, { "paragraph_id": 8, "text": "The Emancipation Proclamation was never challenged in court. To ensure the abolition of slavery in all of the U.S., Lincoln also insisted that Reconstruction plans for Southern states require them to enact laws abolishing slavery (which occurred during the war in Tennessee, Arkansas, and Louisiana); Lincoln encouraged border states to adopt abolition (which occurred during the war in Maryland, Missouri, and West Virginia) and pushed for passage of the 13th Amendment. The Senate passed the 13th Amendment by the necessary two-thirds vote on April 8, 1864; the House of Representatives did so on January 31, 1865; and the required three-fourths of the states ratified it on December 6, 1865. The amendment made slavery and involuntary servitude unconstitutional, \"except as a punishment for a crime\".", "title": "" }, { "paragraph_id": 9, "text": "The United States Constitution of 1787 did not use the word \"slavery\" but included several provisions about unfree persons. The Three-Fifths Compromise (in Article I, Section 2) allocated congressional representation based \"on the whole Number of free Persons\" and \"three-fifths of all other Persons\". Under the Fugitive Slave Clause (Article IV, Section 2), \"No person held to Service or Labour in one State\" would become legally free by escaping to another. Article I, Section 9 allowed Congress to pass legislation to outlaw the \"Importation of Persons\", but not until 1808. However, for purposes of the Fifth Amendment—which states that, \"No person shall ... be deprived of life, liberty, or property, without due process of law\"—slaves were understood to be property. Although abolitionists used the Fifth Amendment to argue against slavery, it was made part of the legal basis for treating slaves as property by Dred Scott v. Sandford (1857). Slavery was also supported in law and in practice by a pervasive culture of white supremacy. Nonetheless, between 1777 and 1804, every Northern state provided for the immediate or gradual abolition of slavery. No Southern state did so, and the slave population of the South continued to grow, peaking at almost four million people at the beginning of the Civil War, when most slave states sought to break away from the United States.", "title": "Authority" }, { "paragraph_id": 10, "text": "Lincoln understood that the federal government's power to end slavery in peacetime was limited by the Constitution, which, before 1865, committed the issue to individual states. 
During the Civil War, however, Lincoln issued the Emancipation Proclamation under his authority as \"Commander in Chief of the Army and Navy\" under Article II, section 2 of the United States Constitution. As such, in the Emancipation Proclamation he claimed to have the authority to free persons held as slaves in those states that were in rebellion \"as a fit and necessary war measure for suppressing said rebellion\". Lincoln also cited the Confiscation Act of 1861 and Confiscation Act of 1862 passed by Congress as sources for his authority in the Preliminary Emancipation Proclamation, but he did not mention these in the Emancipation Proclamation itself. He did not have such authority over the four border slave-holding states that were not in rebellion—Missouri, Kentucky, Maryland and Delaware—so those states were not named in the Proclamation. The fifth border jurisdiction, West Virginia, where slavery remained legal but was in the process of being abolished, was, in January 1863, still part of the legally recognized \"reorganized\" state of Virginia, based in Alexandria, which was in the Union (as opposed to the Confederate state of Virginia, based in Richmond).", "title": "Authority" }, { "paragraph_id": 11, "text": "The Emancipation Proclamation did not free all slaves in the U.S., contrary to a common misconception; it applied in the ten states that were still in rebellion on January 1, 1863, but it did not cover the nearly 500,000 slaves in the slaveholding border states (Missouri, Kentucky, Maryland, and Delaware) or in parts of Virginia and Louisiana that were no longer in rebellion. Those slaves were freed by later separate state and federal actions. The areas covered were \"Arkansas, Texas, Louisiana (except the Parishes of St. Bernard, Plaquemines, Jefferson, St. John, St. Charles, St. James, Ascension, Assumption, Terrebonne, Lafourche, St. Mary, St. Martin, and Orleans, including the city of New Orleans), Mississippi, Alabama, Florida, Georgia, South Carolina, North Carolina, and Virginia (except the forty-eight counties designated as West Virginia, and also the counties of Berkley, Accomac, Northampton, Elizabeth City, York, Princess Ann, and Norfolk, including the cities of Norfolk and Portsmouth).\"", "title": "Coverage" }, { "paragraph_id": 12, "text": "The state of Tennessee had already mostly returned to Union control, under a recognized Union government, so it was not named and was exempted. Virginia was named, but exemptions were specified for the 48 counties then in the process of forming the new state of West Virginia, and seven additional counties and two cities in the Union-controlled Tidewater region of Virginia. Also specifically exempted were New Orleans and 13 named parishes of Louisiana, which were mostly under federal control at the time of the Emancipation Proclamation. These exemptions left unemancipated an additional 300,000 slaves.", "title": "Coverage" }, { "paragraph_id": 13, "text": "The Emancipation Proclamation has been ridiculed, notably by Richard Hofstadter, who wrote that it \"had all the moral grandeur of a bill of lading\" and \"declared free all slaves ... precisely where its effect could not reach\". Disagreeing with Hofstadter, William W. 
Freehling wrote that Lincoln's asserting his power as Commander-in-Chief to issue the proclamation \"reads not like an entrepreneur's bill for past services but like a warrior's brandishing of a new weapon\".", "title": "Coverage" }, { "paragraph_id": 14, "text": "The Emancipation Proclamation resulted in the emancipation of a substantial percentage of the slaves in the Confederate states as the Union armies advanced through the South and slaves escaped to Union lines, or slave owners fled, leaving slaves behind. The Emancipation Proclamation also committed the Union to ending slavery in addition to preserving the Union.", "title": "Coverage" }, { "paragraph_id": 15, "text": "Although the Emancipation Proclamation had freed most slaves as a war measure, it had not made slavery illegal. Of the states that were exempted from the Emancipation Proclamation, Maryland, Missouri, Tennessee, and West Virginia prohibited slavery before the war ended. In 1863, President Lincoln proposed a moderate plan for the Reconstruction of the captured Confederate State of Louisiana. Only 10 percent of the state's electorate had to take the loyalty oath. The state was also required to accept the Emancipation Proclamation and abolish slavery in its new constitution. By December 1864, the Lincoln plan abolishing slavery had been enacted not only in Louisiana, but also in Arkansas and Tennessee. In Kentucky, Union Army commanders relied on the proclamation's offer of freedom to slaves who enrolled in the Army and provided freedom for an enrollee's entire family; for this and other reasons, the number of slaves in the state fell by more than 70 percent during the war. However, in Delaware and Kentucky, slavery continued to be legal until December 18, 1865, when the Thirteenth Amendment went into effect.", "title": "Coverage" }, { "paragraph_id": 16, "text": "The Fugitive Slave Act of 1850 required individuals to return runaway slaves to their owners. During the war, in May 1861, Union general Benjamin Butler declared that slaves who escaped to Union lines were contraband of war, and accordingly he refused to return them. On May 30, after a cabinet meeting called by President Lincoln, \"Simon Cameron, the secretary of war, telegraphed Butler to inform him that his contraband policy 'is approved.'\" This decision was controversial because it could have been taken to imply recognition of the Confederacy as a separate, independent sovereign state under international law, a notion that Lincoln steadfastly denied. In addition, as contraband, these people were legally designated as \"property\" when they crossed Union lines and their ultimate status was uncertain.", "title": "Background" }, { "paragraph_id": 17, "text": "In December 1861, Lincoln sent his first annual message to Congress (the State of the Union Address, but then typically given in writing and not referred to as such). In it he praised the free labor system for respecting human rights over property rights; he endorsed legislation to address the status of contraband slaves and slaves in loyal states, possibly through buying their freedom with federal money; and he endorsed federal funding of voluntary colonization. In January 1862, Thaddeus Stevens, the Republican leader in the House, called for total war against the rebellion to include emancipation of slaves, arguing that emancipation, by forcing the loss of enslaved labor, would ruin the rebel economy. 
On March 13, 1862, Congress approved an Act Prohibiting the Return of Slaves, which prohibited \"All officers or persons in the military or naval service of the United States\" from returning fugitive slaves to their owners. Pursuant to a law signed by Lincoln, slavery was abolished in the District of Columbia on April 16, 1862, and owners were compensated.", "title": "Background" }, { "paragraph_id": 18, "text": "On June 19, 1862, Congress prohibited slavery in all current and future United States territories (though not in the states), and President Lincoln quickly signed the legislation. This act effectively repudiated the 1857 opinion of the Supreme Court of the United States in the Dred Scott case that Congress was powerless to regulate slavery in U.S. territories. It also rejected the notion of popular sovereignty that had been advanced by Stephen A. Douglas as a solution to the slavery controversy, while completing the effort first legislatively proposed by Thomas Jefferson in 1784 to confine slavery within the borders of existing states.", "title": "Background" }, { "paragraph_id": 19, "text": "On August 6, 1861, the First Confiscation Act freed the slaves who were employed \"against the Government and lawful authority of the United States.\" On July 17, 1862, the Second Confiscation Act freed the slaves \"within any place occupied by rebel forces and afterwards occupied by forces of the United States.\" The Second Confiscation Act, unlike the First Confiscation Act, explicitly provided that all slaves covered by it would be permanently freed, stating in section 10 that \"all slaves of persons who shall hereafter be engaged in rebellion against the government of the United States, or who shall in any way give aid or comfort thereto, escaping from such persons and taking refuge within the lines of the army; and all slaves captured from such persons or deserted by them and coming under the control of the government of the United States; and all slaves of such person found on [or] being within any place occupied by rebel forces and afterwards occupied by the forces of the United States, shall be deemed captives of war, and shall be forever free of their servitude, and not again held as slaves.\" However, Lincoln's position continued to be that, although Congress lacked the power to free the slaves in rebel-held states, he, as commander in chief, could do so if he deemed it a proper military measure. By this time, in the summer of 1862, Lincoln had drafted the preliminary Emancipation Proclamation, which he issued on September 22, 1862. It declared that, on January 1, 1863, he would free the slaves in states still in rebellion. Lincoln's preliminary Emancipation Proclamation cited both Confiscations Acts as sources for his authority to issue the Emancipation Proclamation, although neither of these acts would be mentioned in the text of the Emancipation Proclamation itself.", "title": "Background" }, { "paragraph_id": 20, "text": "Abolitionists had long been urging Lincoln to free all slaves. In the summer of 1862, Republican editor Horace Greeley of the highly influential New-York Tribune wrote a famous editorial entitled \"The Prayer of Twenty Millions\" demanding a more aggressive attack on the Confederacy and faster emancipation of the slaves: \"On the face of this wide earth, Mr. President, there is not one ... intelligent champion of the Union cause who does not feel ... 
that the rebellion, if crushed tomorrow, would be renewed if slavery were left in full vigor and that every hour of deference to slavery is an hour of added and deepened peril to the Union.\" Lincoln responded in his open letter to Horace Greeley of August 22, 1862:", "title": "Background" }, { "paragraph_id": 21, "text": "If there be those who would not save the Union, unless they could at the same time save slavery, I do not agree with them. If there be those who would not save the Union unless they could at the same time destroy slavery, I do not agree with them. My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that. What I do about slavery, and the colored race, I do because I believe it helps to save the Union; and what I forbear, I forbear because I do not believe it would help to save the Union.... I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free.", "title": "Background" }, { "paragraph_id": 22, "text": "Lincoln scholar Harold Holzer wrote about Lincoln's letter: \"Unknown to Greeley, Lincoln composed this after he had already drafted a preliminary Emancipation Proclamation, which he had determined to issue after the next Union military victory. Therefore, this letter, was in truth, an attempt to position the impending announcement in terms of saving the Union, not freeing slaves as a humanitarian gesture. It was one of Lincoln's most skillful public relations efforts, even if it has cast longstanding doubt on his sincerity as a liberator.\" Historian Richard Striner argues that \"for years\" Lincoln's letter has been misread as \"Lincoln only wanted to save the Union.\" However, within the context of Lincoln's entire career and pronouncements on slavery this interpretation is wrong, according to Striner. Rather, Lincoln was softening the strong Northern white supremacist opposition to his imminent emancipation by tying it to the cause of the Union. This opposition would fight for the Union but not to end slavery, so Lincoln gave them the means and motivation to do both, at the same time. In effect, then, Lincoln may have already chosen the third option he mentioned to Greeley: \"freeing some and leaving others alone\"; that is, freeing slaves in the states still in rebellion on January 1, 1863, but leaving enslaved those in the border states and Union-occupied areas.", "title": "Background" }, { "paragraph_id": 23, "text": "Nevertheless, in the Preliminary Emancipation Proclamation itself, Lincoln said that he would recommend to Congress that it compensate states that \"adopt, immediate, or gradual abolishment of slavery\". In addition, during the hundred days between September 22, 1862, when he issued the Preliminary Emancipation Proclamation, and January 1, 1863, when he issued the Final Emancipation Proclamation, Lincoln took actions that suggest that he continued to consider the first option he mentioned to Greeley — saving the Union without freeing any slave — a possibility. Historian William W. Freehling wrote, \"From mid-October to mid-November 1862, he sent personal envoys to Louisiana, Tennessee, and Arkansas\". 
Each of these envoys carried with him a letter from Lincoln stating that if the people of their state desired \"to avoid the unsatisfactory\" terms of the Final Emancipation Proclamation \"and to have peace again upon the old terms\" (i.e., with slavery intact), they should rally \"the largest number of the people possible\" to vote in \"elections of members to the Congress of the United States ... friendly to their object\". Later, in his Annual Message to Congress of December 1, 1862, Lincoln proposed an amendment to the U.S. Constitution providing that any state that abolished slavery before January 1, 1900, would receive compensation from the United States in the form of interest-bearing U.S. bonds. Adoption of this amendment, in theory, could have ended the war without ever permanently ending slavery, because the amendment provided, \"Any State having received bonds ... and afterwards reintroducing or tolerating slavery therein, shall refund to the United States the bonds so received, or the value thereof, and all interest paid thereon\".", "title": "Background" }, { "paragraph_id": 24, "text": "In his 2014 book, Lincoln's Gamble, journalist and historian Todd Brewster asserted that Lincoln's desire to reassert the saving of the Union as his sole war goal was, in fact, crucial to his claim of legal authority for emancipation. Since slavery was protected by the Constitution, the only way that he could free the slaves was as a tactic of war—not as the mission itself. But that carried the risk that when the war ended, so would the justification for freeing the slaves. Late in 1862, Lincoln asked his Attorney General, Edward Bates, for an opinion as to whether slaves freed through a war-related proclamation of emancipation could be re-enslaved once the war was over. Bates had to work through the language of the Dred Scott decision to arrive at an answer, but he finally concluded that they could indeed remain free. Still, a complete end to slavery would require a constitutional amendment.", "title": "Background" }, { "paragraph_id": 25, "text": "Conflicting advice, to free all slaves, or not free them at all, was presented to Lincoln in public and private. Thomas Nast, a cartoon artist during the Civil War and the late 1800s considered \"Father of the American Cartoon\", composed many works, including a two-sided spread that showed the transition from slavery into civilization after President Lincoln signed the Proclamation. Nast believed in equal opportunity and equality for all people, including enslaved Africans or free blacks. A mass rally in Chicago on September 7, 1862, demanded immediate and universal emancipation of slaves. A delegation headed by William W. Patton met the president at the White House on September 13. Lincoln had declared in peacetime that he had no constitutional authority to free the slaves. Even used as a war power, emancipation was a risky political act. Public opinion as a whole was against it. There would be strong opposition among Copperhead Democrats and an uncertain reaction from loyal border states. Delaware and Maryland already had a high percentage of free blacks: 91.2% and 49.7%, respectively, in 1860.", "title": "Background" }, { "paragraph_id": 26, "text": "Lincoln first discussed the proclamation with his cabinet in July 1862. He drafted his \"preliminary proclamation\" and read it to Secretary of State William Seward, and Secretary of Navy Gideon Welles, on July 13. 
Seward and Welles were at first speechless, then Seward referred to possible anarchy throughout the South and resulting foreign intervention; Welles apparently said nothing. On July 22, Lincoln presented it to his entire cabinet as something he had determined to do and he asked their opinion on wording. Although Secretary of War Edwin Stanton supported it, Seward advised Lincoln to issue the proclamation after a major Union victory, or else it would appear as if the Union was giving \"its last shriek of retreat\". Walter Stahr, however, writes, \"There are contemporary sources, however, that suggest others were involved in the decision to delay\", and Stahr quotes them.", "title": "Drafting and issuance of the proclamation" }, { "paragraph_id": 27, "text": "In September 1862, the Battle of Antietam gave Lincoln the victory he needed to issue the Preliminary Emancipation Proclamation. In the battle, though the Union suffered heavier losses than the Confederates and General McClellan allowed the escape of Robert E. Lee's retreating troops, Union forces turned back a Confederate invasion of Maryland, eliminating more than a quarter of Lee's army in the process.", "title": "Drafting and issuance of the proclamation" }, { "paragraph_id": 28, "text": "On September 22, 1862, five days after Antietam, and while residing at the Soldier's Home, Lincoln called his cabinet into session and issued the Preliminary Emancipation Proclamation. According to Civil War historian James M. McPherson, Lincoln told cabinet members, \"I made a solemn vow before God, that if General Lee was driven back from Pennsylvania, I would crown the result by the declaration of freedom to the slaves.\" Lincoln had first shown an early draft of the proclamation to Vice President Hannibal Hamlin, an ardent abolitionist, who was more often kept in the dark on presidential decisions. The final proclamation was issued on January 1, 1863. Although implicitly granted authority by Congress, Lincoln used his powers as Commander-in-Chief of the Army and Navy to issue the proclamation \"as a necessary war measure.\" Therefore, it was not the equivalent of a statute enacted by Congress or a constitutional amendment, because Lincoln or a subsequent president could revoke it. One week after issuing the final Proclamation, Lincoln wrote to Major General John McClernand: \"After the commencement of hostilities I struggled nearly a year and a half to get along without touching the 'institution'; and when finally I conditionally determined to touch it, I gave a hundred days fair notice of my purpose, to all the States and people, within which time they could have turned it wholly aside, by simply again becoming good citizens of the United States. They chose to disregard it, and I made the peremptory proclamation on what appeared to me to be a military necessity. And being made, it must stand\". Lincoln continued, however, that the states included in the proclamation could \"adopt systems of apprenticeship for the colored people, conforming substantially to the most approved plans of gradual emancipation; and ... they may be nearly as well off, in this respect, as if the present trouble had not occurred\". He concluded by asking McClernand not to \"make this letter public\".", "title": "Drafting and issuance of the proclamation" }, { "paragraph_id": 29, "text": "Initially, the Emancipation Proclamation effectively freed only a small percentage of the slaves, namely those who were behind Union lines in areas not exempted. 
Most slaves were still behind Confederate lines or in exempted Union-occupied areas. Secretary of State William H. Seward commented, \"We show our sympathy with slavery by emancipating slaves where we cannot reach them and holding them in bondage where we can set them free.\" Had any slave state ended its secession attempt before January 1, 1863, it could have kept slavery, at least temporarily. The Proclamation freed the slaves only in areas of the South that were still in rebellion on January 1, 1863. But as the Union army advanced into the South, slaves fled to behind its lines, and \"[s]hortly after issuing the Emancipation Proclamation, the Lincoln administration lifted the ban on enticing slaves into Union lines.\" These events contributed to the destruction of slavery.", "title": "Drafting and issuance of the proclamation" }, { "paragraph_id": 30, "text": "The Emancipation Proclamation also allowed for the enrollment of freed slaves into the United States military. During the war nearly 200,000 black men, most of them ex-slaves, joined the Union Army. Their contributions were significant in winning the war. The Confederacy did not allow slaves in their army as soldiers until the last month before its defeat.", "title": "Drafting and issuance of the proclamation" }, { "paragraph_id": 31, "text": "Though the counties of Virginia that were soon to form West Virginia were specifically exempted from the Proclamation (Jefferson County being the only exception), a condition of the state's admittance to the Union was that its constitution provide for the gradual abolition of slavery (an immediate emancipation of all slaves was also adopted there in early 1865). Slaves in the border states of Maryland and Missouri were also emancipated by separate state action before the Civil War ended. In Maryland, a new state constitution abolishing slavery in the state went into effect on November 1, 1864. The Union-occupied counties of eastern Virginia and parishes of Louisiana, which had been exempted from the Proclamation, both adopted state constitutions that abolished slavery in April 1864. In early 1865, Tennessee adopted an amendment to its constitution prohibiting slavery.", "title": "Drafting and issuance of the proclamation" }, { "paragraph_id": 32, "text": "The Proclamation was issued in a preliminary version and a final version. The former, issued on September 22, 1862, was a preliminary announcement outlining the intent of the latter, which took effect 100 days later on January 1, 1863, during the second year of the Civil War. The preliminary Emancipation Proclamation was Abraham Lincoln's declaration that all slaves would be permanently freed in all areas of the Confederacy that were still in rebellion on January 1, 1863. The ten affected states were individually named in the final Emancipation Proclamation (South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, North Carolina). Not included were the Union slave states of Maryland, Delaware, Missouri and Kentucky. Also not named was the state of Tennessee, in which a Union-controlled military government had already been set up, based in the capital, Nashville. 
Specific exemptions were stated for areas also under Union control on January 1, 1863, namely 48 counties that would soon become West Virginia, seven other named counties of Virginia including Berkeley and Hampshire counties, which were soon added to West Virginia, New Orleans and 13 named parishes nearby.", "title": "Implementation" }, { "paragraph_id": 33, "text": "Union-occupied areas of the Confederate states where the proclamation was put into immediate effect by local commanders included Winchester, Virginia, Corinth, Mississippi, the Sea Islands along the coasts of the Carolinas and Georgia, Key West, Florida, and Port Royal, South Carolina.", "title": "Implementation" }, { "paragraph_id": 34, "text": "On New Year's Eve in 1862, African Americans – enslaved and free – gathered across the United States to hold Watch Night ceremonies for \"Freedom's Eve\", looking toward the stroke of midnight and the promised fulfillment of the Proclamation. It has been inaccurately claimed that the Emancipation Proclamation did not free a single slave; historian Lerone Bennett Jr. alleged that the proclamation was a hoax deliberately designed not to free any slaves. However, as a result of the Proclamation, most slaves became free during the course of the war, beginning on the day it took effect; eyewitness accounts at places such as Hilton Head Island, South Carolina, and Port Royal, South Carolina record celebrations on January 1 as thousands of blacks were informed of their new legal status of freedom. \"Estimates of the number of slaves freed immediately by the Emancipation Proclamation are uncertain. One contemporary estimate put the 'contraband' population of Union-occupied North Carolina at 10,000, and the Sea Islands of South Carolina also had a substantial population. Those 20,000 slaves were freed immediately by the Emancipation Proclamation.\" This Union-occupied zone where freedom began at once included parts of eastern North Carolina, the Mississippi Valley, northern Alabama, the Shenandoah Valley of Virginia, a large part of Arkansas, and the Sea Islands of Georgia and South Carolina. Although some counties of Union-occupied Virginia were exempted from the Proclamation, the lower Shenandoah Valley and the area around Alexandria were covered. Emancipation was immediately enforced as Union soldiers advanced into the Confederacy. Slaves fled their masters and were often assisted by Union soldiers.", "title": "Implementation" }, { "paragraph_id": 35, "text": "On the other hand, Robert Gould Shaw wrote to his mother on September 25, 1862, \"So the 'Proclamation of Emancipation' has come at last, or rather, its forerunner.... I suppose you all are very much excited about it. For my part, I can't see what practical good it can do now. Wherever our army has been, there remain no slaves, and the Proclamation will not free them where we don't go.\" Ten days later, he wrote her again, \"Don't imagine, from what I said in my last ... that I thought Mr. Lincoln's 'Emancipation Proclamation' not right ... but still, as a war-measure, I don't see the immediate benefit of it, ... as the slaves are sure of being free at any rate, with or without an Emancipation Act.\"", "title": "Implementation" }, { "paragraph_id": 36, "text": "Booker T. Washington, as a boy of 9 in Virginia, remembered the day in early 1865:", "title": "Implementation" }, { "paragraph_id": 37, "text": "As the great day drew nearer, there was more singing in the slave quarters than usual. 
It was bolder, had more ring, and lasted later into the night. Most of the verses of the plantation songs had some reference to freedom.... [S]ome man who seemed to be a stranger (a United States officer, I presume) made a little speech and then read a rather long paper—the Emancipation Proclamation, I think. After the reading we were told that we were all free, and could go when and where we pleased. My mother, who was standing by my side, leaned over and kissed her children, while tears of joy ran down her cheeks. She explained to us what it all meant, that this was the day for which she had been so long praying, but fearing that she would never live to see.", "title": "Implementation" }, { "paragraph_id": 38, "text": "Runaway slaves who had escaped to Union lines had previously been held by the Union Army as \"contraband of war\" under the Confiscation Acts; when the proclamation took effect, they were told at midnight that they were free to leave. The Sea Islands off the coast of Georgia had been occupied by the Union Navy earlier in the war. The whites had fled to the mainland while the blacks stayed. An early program of Reconstruction was set up for the former slaves, including schools and training. Naval officers read the proclamation and told them they were free.", "title": "Implementation" }, { "paragraph_id": 39, "text": "Slaves had been part of the \"engine of war\" for the Confederacy. They produced and prepared food; sewed uniforms; repaired railways; worked on farms and in factories, shipping yards, and mines; built fortifications; and served as hospital workers and common laborers. News of the Proclamation spread rapidly by word of mouth, arousing hopes of freedom, creating general confusion, and encouraging thousands to escape to Union lines. George Washington Albright, a teenage slave in Mississippi, recalled that like many of his fellow slaves, his father escaped to join Union forces. According to Albright, plantation owners tried to keep the Proclamation from slaves but news of it came through the \"grapevine\". The young slave became a \"runner\" for an informal group they called the 4Ls (\"Lincoln's Legal Loyal League\") bringing news of the proclamation to secret slave meetings at plantations throughout the region.", "title": "Implementation" }, { "paragraph_id": 40, "text": "Robert E. Lee saw the Emancipation Proclamation as a way for the Union to bolster the number of soldiers it could place on the field, making it imperative for the Confederacy to increase their own numbers. Writing on the matter after the sack of Fredericksburg, Lee wrote, \"In view of the vast increase of the forces of the enemy, of the savage and brutal policy he has proclaimed, which leaves us no alternative but success or degradation worse than death, if we would save the honor of our families from pollution [and] our social system from destruction, let every effort be made, every means be employed, to fill and maintain the ranks of our armies, until God in his mercy shall bless us with the establishment of our independence.\"", "title": "Implementation" }, { "paragraph_id": 41, "text": "The Proclamation was immediately denounced by Copperhead Democrats, who opposed the war and advocated restoring the union by allowing slavery. 
Horatio Seymour, while running for governor of New York, cast the Emancipation Proclamation as a call for slaves to commit extreme acts of violence on all white southerners, saying it was \"a proposal for the butchery of women and children, for scenes of lust and rapine, and of arson and murder, which would invoke the interference of civilized Europe\". The Copperheads also saw the Proclamation as an unconstitutional abuse of presidential power. Editor Henry A. Reeves wrote in Greenport's Republican Watchman that \"In the name of freedom of Negroes, [the proclamation] imperils the liberty of white men; to test a utopian theory of equality of races which Nature, History and Experience alike condemn as monstrous, it overturns the Constitution and Civil Laws and sets up Military Usurpation in their stead.\"", "title": "Implementation" }, { "paragraph_id": 42, "text": "Racism remained pervasive on both sides of the conflict and many in the North supported the war only as an effort to force the South to stay in the Union. The promises of many Republican politicians that the war was to restore the Union and not about black rights or ending slavery were declared lies by their opponents, who cited the Proclamation. Copperhead David Allen spoke to a rally in Columbiana, Ohio, stating, \"I have told you that this war is carried on for the Negro. There is the proclamation of the President of the United States. Now fellow Democrats I ask you if you are going to be forced into a war against your Britheren of the Southern States for the Negro. I answer No!\" The Copperheads saw the Proclamation as irrefutable proof of their position and the beginning of a political rise for their members; in Connecticut, H. B. Whiting wrote that the truth was now plain even to \"those stupid thickheaded persons who persisted in thinking that the President was a conservative man and that the war was for the restoration of the Union under the Constitution.\"", "title": "Implementation" }, { "paragraph_id": 43, "text": "War Democrats, who rejected the Copperhead position within their party, found themselves in a quandary. While throughout the war they had continued to espouse the racist positions of their party and their disdain of the concerns of slaves, they did see the Proclamation as a viable military tool against the South and worried that opposing it might demoralize troops in the Union army. The question would continue to trouble them and eventually lead to a split within their party as the war progressed.", "title": "Implementation" }, { "paragraph_id": 44, "text": "Lincoln further alienated many in the Union two days after issuing the Preliminary Emancipation Proclamation by suspending habeas corpus. His opponents linked these two actions in their claims that he was becoming a despot. In light of this and a lack of military success for the Union armies, many War Democrat voters who had previously supported Lincoln turned against him and joined the Copperheads in the off-year elections held in October and November.", "title": "Implementation" }, { "paragraph_id": 45, "text": "In the 1862 elections, the Democrats gained 28 seats in the House as well as the governorship of New York. Lincoln's friend Orville Hickman Browning told the president that the Proclamation and the suspension of habeas corpus had been \"disastrous\" for his party by handing the Democrats so many weapons. Lincoln made no response. 
Copperhead William Javis of Connecticut pronounced the election the \"beginning of the end of the utter downfall of Abolitionism in the United States\".", "title": "Implementation" }, { "paragraph_id": 46, "text": "Historians James M. McPherson and Allan Nevins state that though the results looked very troubling, they could be seen favorably by Lincoln; his opponents did well only in their historic strongholds and \"at the national level their gains in the House were the smallest of any minority party's in an off-year election in nearly a generation. Michigan, California, and Iowa all went Republican... Moreover, the Republicans picked up five seats in the Senate.\" McPherson states, \"If the election was in any sense a referendum on emancipation and on Lincoln's conduct of the war, a majority of Northern voters endorsed these policies.\"", "title": "Implementation" }, { "paragraph_id": 47, "text": "The initial Confederate response was outrage. The Proclamation was seen as vindication of the rebellion and proof that Lincoln would have abolished slavery even if the states had remained in the Union. In an August 1863 letter to President Lincoln, U.S. Army general Ulysses S. Grant observed that the proclamation's \"arming the negro\", together with \"the emancipation of the negro, is the heavyest [sic] blow yet given the Confederacy. The South rave a greatdeel [sic] about it and profess to be very angry.\" In May 1863, a few months after the Proclamation took effect, the Confederacy passed a law demanding \"full and ample retaliation\" against the U.S. for such measures. The Confederacy stated that black U.S. soldiers captured while fighting against the Confederacy would be tried as slave insurrectionists in civil courts—a capital offense with an automatic sentence of death. Less than a year after the law's passage, the Confederates massacred black U.S. soldiers at Fort Pillow.", "title": "Implementation" }, { "paragraph_id": 48, "text": "Confederate President Jefferson Davis reacted to the Emancipation Proclamation with outrage and in an address to the Confederate Congress on January 12 threatened to send any U.S. military officer captured in Confederate territory covered by the proclamation to state authorities to be charged with \"exciting servile insurrection\", which was a capital offense.", "title": "Implementation" }, { "paragraph_id": 49, "text": "Confederate General Robert E. Lee called the Proclamation a \"savage and brutal policy he has proclaimed, which leaves us no alternative but success or degradation worse than death.\"", "title": "Implementation" }, { "paragraph_id": 50, "text": "However, some Confederates welcomed the Proclamation, because they believed it would strengthen pro-slavery sentiment in the Confederacy and thus lead to greater enlistment of white men into the Confederate army. According to one Confederate cavalry sergeant from Kentucky, \"The Proclamation is worth three hundred thousand soldiers to our Government at least.... It shows exactly what this war was brought about for and the intention of its damnable authors.\" Even some Union soldiers concurred with this view and expressed reservations about the Proclamation, not on principle, but rather because they were afraid it would increase the Confederacy's determination to fight on and maintain slavery. 
One Union soldier from New York stated worryingly after the Proclamation's issuance, \"I know enough of the southern spirit that I think they will fight for the institution of slavery even to extermination.\"", "title": "Implementation" }, { "paragraph_id": 51, "text": "As a result of the Proclamation, the price of slaves in the Confederacy increased in the months after its issuance, with one Confederate from South Carolina opining in 1865 that \"now is the time for Uncle to buy some negro women and children....\"", "title": "Implementation" }, { "paragraph_id": 52, "text": "As Lincoln had hoped, the proclamation turned foreign popular opinion in favor of the Union by gaining the support of anti-slavery countries and countries that had already abolished slavery (especially the developed countries in Europe such as the United Kingdom and France). This shift ended the Confederacy's hopes of gaining official recognition.", "title": "Implementation" }, { "paragraph_id": 53, "text": "Since the Emancipation Proclamation made the eradication of slavery an explicit Union war goal, it linked support for the South to support for slavery. Public opinion in Britain would not tolerate support for slavery. As Henry Adams noted, \"The Emancipation Proclamation has done more for us than all our former victories and all our diplomacy.\" In Italy, Giuseppe Garibaldi hailed Lincoln as \"the heir of the aspirations of John Brown\". On August 6, 1863, Garibaldi wrote to Lincoln: \"Posterity will call you the great emancipator, a more enviable title than any crown could be, and greater than any merely mundane treasure\".", "title": "Implementation" }, { "paragraph_id": 54, "text": "Mayor Abel Haywood, a representative for workers from Manchester, England, wrote to Lincoln saying, \"We joyfully honor you for many decisive steps toward practically exemplifying your belief in the words of your great founders: 'All men are created free and equal.'\" The Emancipation Proclamation served to ease tensions with Europe over the North's conduct of the war, and combined with the recent failed Southern offensive at Antietam, to remove any practical chance for the Confederacy to receive foreign military intervention in the war.", "title": "Implementation" }, { "paragraph_id": 55, "text": "However, in spite of the Emancipation Proclamation, arms sales to the Confederacy through blockade running, from British firms and dealers, continued, with knowledge of the British government. The Confederacy was able to sustain the fight for two more years largely thanks to the weapons supplied by British blockade runners. As a result, the blockade runners operating from Britain were responsible for killing 400,000 additional soldiers and civilians on both sides.", "title": "Implementation" }, { "paragraph_id": 56, "text": "Lincoln's Gettysburg Address on November 19, 1863 made indirect reference to the Proclamation and the ending of slavery as a war goal with the phrase \"new birth of freedom\". The Proclamation solidified Lincoln's support among the rapidly growing abolitionist elements of the Republican Party and ensured that they would not block his renomination in 1864.", "title": "Gettysburg Address" }, { "paragraph_id": 57, "text": "In December 1863, Lincoln issued his Proclamation of Amnesty and Reconstruction, which dealt with the ways the rebel states could reconcile with the Union. 
Key provisions required that the states accept the Emancipation Proclamation and thus the freedom of their slaves, and accept the Confiscation Acts, as well as the Act banning slavery in United States territories.", "title": "Proclamation of Amnesty and Reconstruction (1863)" }, { "paragraph_id": 58, "text": "Near the end of the war, abolitionists were concerned that the Emancipation Proclamation would be construed solely as a war measure, as Lincoln intended, and would no longer apply once fighting ended. They also were increasingly anxious to secure the freedom of all slaves, not just those freed by the Emancipation Proclamation. Thus pressed, Lincoln staked a large part of his 1864 presidential campaign on a constitutional amendment to abolish slavery throughout the United States. Lincoln's campaign was bolstered by votes in both Maryland and Missouri to abolish slavery in those states. Maryland's new constitution abolishing slavery took effect on November 1, 1864. Slavery in Missouri ended on January 11, 1865, when a state convention approved an ordinance abolishing slavery by a vote of 60-4, and later the same day, Governor Thomas C. Fletcher followed up with his own \"Proclamation of Freedom.\"", "title": "Postbellum" }, { "paragraph_id": 59, "text": "Winning re-election, Lincoln pressed the lame duck 38th Congress to pass the proposed amendment immediately rather than wait for the incoming 39th Congress to convene. In January 1865, Congress sent to the state legislatures for ratification what became the Thirteenth Amendment, banning slavery in all U.S. states and territories, except as punishment for a crime. The amendment was ratified by the legislatures of enough states by December 6, 1865, and proclaimed 12 days later. There were approximately 40,000 slaves in Kentucky and 1,000 in Delaware who were liberated then.", "title": "Postbellum" }, { "paragraph_id": 60, "text": "Lincoln's proclamation has been called \"one of the most radical emancipations in the history of the modern world.\" Nonetheless, as over the years American society continued to be deeply unfair towards black people, cynicism towards Lincoln and the Emancipation Proclamation increased. One attack was Lerone Bennett's Forced into Glory: Abraham Lincoln's White Dream (2000), which claimed that Lincoln was a white supremacist who issued the Emancipation Proclamation in lieu of the real racial reforms for which radical abolitionists pushed. To this, one scholarly review states that \"Few Civil War scholars take Bennett and DiLorenzo seriously, pointing to their narrow political agenda and faulty research.\" In his Lincoln's Emancipation Proclamation, Allen C. Guelzo noted professional historians' lack of substantial respect for the document, since it has been the subject of few major scholarly studies. He argued that Lincoln was the U.S.'s \"last Enlightenment politician\" and as such had \"allegiance to 'reason, cold, calculating, unimpassioned reason'.... But the most important among the Enlightenment's political virtues for Lincoln, and for his Proclamation, was prudence\".", "title": "Critiques" }, { "paragraph_id": 61, "text": "Other historians have given more credit to Lincoln for what he accomplished toward ending slavery and for his own growth in political and moral stature. More might have been accomplished if he had not been assassinated. 
As Eric Foner wrote:", "title": "Critiques" }, { "paragraph_id": 62, "text": "Lincoln was not an abolitionist or Radical Republican, a point Bennett reiterates innumerable times. He did not favor immediate abolition before the war, and held racist views typical of his time. But he was also a man of deep convictions when it came to slavery, and during the Civil War displayed a remarkable capacity for moral and political growth.", "title": "Critiques" }, { "paragraph_id": 63, "text": "Kal Ashraf wrote:", "title": "Critiques" }, { "paragraph_id": 64, "text": "Perhaps in rejecting the critical dualism—Lincoln as individual emancipator pitted against collective self-emancipators—there is an opportunity to recognise the greater persuasiveness of the combination. In a sense, yes: a racist, flawed Lincoln did something heroic, and not in lieu of collective participation, but next to, and enabled, by it. To venerate a singular 'Great Emancipator' may be as reductive as dismissing the significance of Lincoln's actions. Who he was as a man, no one of us can ever really know. So it is that the version of Lincoln we keep is also the version we make.", "title": "Critiques" }, { "paragraph_id": 65, "text": "Dr. Martin Luther King Jr. made many references to the Emancipation Proclamation during the civil rights movement. These include an \"Emancipation Proclamation Centennial Address\" he gave in New York City on September 12, 1962, in which he placed the Proclamation alongside the Declaration of Independence as an \"imperishable\" contribution to civilization and added, \"All tyrants, past, present and future, are powerless to bury the truths in these declarations....\" He lamented that despite a history where the United States \"proudly professed the basic principles inherent in both documents,\" it \"sadly practiced the antithesis of these principles.\" He concluded, \"There is but one way to commemorate the Emancipation Proclamation. That is to make its declarations of freedom real; to reach back to the origins of our nation when our message of equality electrified an unfree world, and reaffirm democracy by deeds as bold and daring as the issuance of the Emancipation Proclamation.\"", "title": "Legacy in the civil rights era" }, { "paragraph_id": 66, "text": "King's most famous invocation of the Emancipation Proclamation was in a speech from the steps of the Lincoln Memorial at the 1963 March on Washington for Jobs and Freedom (often referred to as the \"I Have a Dream\" speech). King began the speech saying \"Five score years ago, a great American, in whose symbolic shadow we stand, signed the Emancipation Proclamation. This momentous decree came as a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice. It came as a joyous daybreak to end the long night of their captivity. But one hundred years later, we must face the tragic fact that the Negro still is not free. One hundred years later, the life of the Negro is still sadly crippled by the manacles of segregation and the chains of discrimination.\"", "title": "Legacy in the civil rights era" }, { "paragraph_id": 67, "text": "In the early 1960s, Dr. Martin Luther King Jr. and his associates called on President John F. Kennedy to bypass Southern segregationist opposition in the Congress by issuing an executive order to put an end to segregation. This envisioned document was referred to as the \"Second Emancipation Proclamation\". 
Kennedy, however, did not issue a second Emancipation Proclamation \"and noticeably avoided all centennial celebrations of emancipation.\" Historian David W. Blight points out that, although the idea of an executive order to act as a second Emancipation Proclamation \"has been virtually forgotten,\" the manifesto that King and his associates produced calling for an executive order showed his \"close reading of American politics\" and recalled how moral leadership could have an effect on the American public through an executive order. Despite its failure \"to spur a second Emancipation Proclamation from the White House, it was an important and emphatic attempt to combat the structured forgetting of emancipation latent within Civil War memory.\"", "title": "Legacy in the civil rights era" }, { "paragraph_id": 68, "text": "On June 11, 1963, President Kennedy spoke on national television about civil rights. Kennedy, who had been routinely criticized as timid by some civil rights activists, reminded Americans that two black students had been peacefully enrolled in the University of Alabama with the aid of the National Guard, despite the opposition of Governor George Wallace.", "title": "Legacy in the civil rights era" }, { "paragraph_id": 69, "text": "John Kennedy called it a \"moral issue.\" Invoking the centennial of the Emancipation Proclamation he said,", "title": "Legacy in the civil rights era" }, { "paragraph_id": 70, "text": "One hundred years of delay have passed since President Lincoln freed the slaves, yet their heirs, their grandsons, are not fully free. They are not yet freed from the bonds of injustice. They are not yet freed from social and economic oppression. And this Nation, for all its hopes and all its boasts, will not be fully free until all its citizens are free. We preach freedom around the world, and we mean it, and we cherish our freedom here at home, but are we to say to the world, and much more importantly, to each other that this is a land of the free except for the Negroes; that we have no second-class citizens except Negroes; that we have no class or caste system, no ghettoes, no master race except with respect to Negroes? Now the time has come for this Nation to fulfill its promise. The events in Birmingham and elsewhere have so increased the cries for equality that no city or State or legislative body can prudently choose to ignore them.", "title": "Legacy in the civil rights era" }, { "paragraph_id": 71, "text": "In the same speech, Kennedy announced he would introduce a comprehensive civil rights bill in the United States Congress, which he did a week later. Kennedy pushed for its passage until he was assassinated on November 22, 1963. Historian Peniel E. Joseph holds Lyndon Johnson's ability to get that bill, the Civil Rights Act of 1964, signed into law on July 2, 1964, to have been aided by \"the moral forcefulness of the June 11 speech\", which had turned \"the narrative of civil rights from a regional issue into a national story promoting racial equality and democratic renewal.\"", "title": "Legacy in the civil rights era" }, { "paragraph_id": 72, "text": "During the civil rights movement of the 1960s, Lyndon B. 
Johnson invoked the Emancipation Proclamation, holding it up as a promise yet to be fully implemented.", "title": "Legacy in the civil rights era" }, { "paragraph_id": 73, "text": "As vice president, while speaking from Gettysburg on May 30, 1963 (Memorial Day), during the centennial year of the Emancipation Proclamation, Johnson connected it directly with the ongoing civil rights struggles of the time, saying \"One hundred years ago, the slave was freed. One hundred years later, the Negro remains in bondage to the color of his skin.... In this hour, it is not our respective races which are at stake—it is our nation. Let those who care for their country come forward, North and South, white and Negro, to lead the way through this moment of challenge and decision.... Until justice is blind to color, until education is unaware of race, until opportunity is unconcerned with color of men's skins, emancipation will be a proclamation but not a fact. To the extent that the proclamation of emancipation is not fulfilled in fact, to that extent we shall have fallen short of assuring freedom to the free.\"", "title": "Legacy in the civil rights era" }, { "paragraph_id": 74, "text": "As president, Johnson again invoked the proclamation in a speech presenting the Voting Rights Act at a joint session of Congress on Monday, March 15, 1965. This was one week after violence had been inflicted on peaceful civil rights marchers during the Selma to Montgomery marches. Johnson said \"it's not just Negroes, but really it's all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome. As a man whose roots go deeply into Southern soil, I know how agonizing racial feelings are. I know how difficult it is to reshape the attitudes and the structure of our society. But a century has passed—more than 100 years—since the Negro was freed. And he is not fully free tonight. It was more than 100 years ago that Abraham Lincoln—a great President of another party—signed the Emancipation Proclamation. But emancipation is a proclamation and not a fact. A century has passed—more than 100 years—since equality was promised, and yet the Negro is not equal. A century has passed since the day of promise, and the promise is unkept. The time of justice has now come, and I tell you that I believe sincerely that no force can hold it back. It is right in the eyes of man and God that it should come, and when it does, I think that day will brighten the lives of every American.\"", "title": "Legacy in the civil rights era" }, { "paragraph_id": 75, "text": "In the 1963 episode of The Andy Griffith Show, \"Andy Discovers America\", Andy asks Barney to explain the Emancipation Proclamation to Opie who is struggling with history at school. Barney brags about his history expertise, yet it is apparent he cannot answer Andy's question. He finally becomes frustrated and explains it is a proclamation for certain people who wanted emancipation. In addition, the Emancipation Proclamation was also a main item of discussion in the movie Lincoln (2012) directed by Steven Spielberg.", "title": "In popular culture" }, { "paragraph_id": 76, "text": "The Emancipation Proclamation is celebrated around the world, including on stamps of nations such as the Republic of Togo. The United States commemorative was issued on August 16, 1963, the opening day of the Century of Negro Progress Exposition in Chicago, Illinois. 
The stamp, designed by Georg Olden, had an authorized initial printing of 120 million.", "title": "In popular culture" } ]
The Emancipation Proclamation, officially Proclamation 95, was a presidential proclamation and executive order issued by United States President Abraham Lincoln on January 1, 1863, during the American Civil War. The Proclamation had the effect of changing the legal status of more than 3.5 million enslaved African Americans in the secessionist Confederate states from enslaved to free. As soon as slaves escaped the control of their enslavers, either by fleeing to Union lines or through the advance of federal troops, they were permanently free. In addition, the Proclamation allowed for former slaves to "be received into the armed service of the United States". The Emancipation Proclamation was a significant part of the end of slavery in the United States. On September 22, 1862, Lincoln issued the preliminary Emancipation Proclamation. Its third paragraph reads: On January 1, 1863, Lincoln issued the final Emancipation Proclamation. After quoting from the preliminary Emancipation Proclamation, it stated: Lincoln then listed the ten states still in rebellion, excluding parts of states under Union control, and continued: The proclamation provided that the executive branch, including the Army and Navy, "will recognize and maintain the freedom of said persons". Even though it excluded states not in rebellion, as well as parts of Louisiana and Virginia under Union control, it still applied to more than 3.5 million of the 4 million enslaved people in the country. Around 25,000 to 75,000 were immediately emancipated in those regions of the Confederacy where the US Army was already in place. It could not be enforced in the areas still in rebellion, but, as the Union army took control of Confederate regions, the Proclamation provided the legal framework for the liberation of more than three and a half million enslaved people in those regions by the end of the war. The Emancipation Proclamation outraged white Southerners and their sympathizers, who saw it as the beginning of a race war. It energized abolitionists, and undermined those Europeans who wanted to intervene to help the Confederacy. The Proclamation lifted the spirits of African Americans, both free and enslaved. It encouraged many to escape from slavery and flee toward Union lines, where many joined the Union Army. The Emancipation Proclamation became a historic document because it "would redefine the Civil War, turning it [for the North] from a struggle [solely] to preserve the Union to one [also] focused on ending slavery, and set a decisive course for how the nation would be reshaped after that historic conflict." The Emancipation Proclamation was never challenged in court. To ensure the abolition of slavery in all of the U.S., Lincoln also insisted that Reconstruction plans for Southern states require them to enact laws abolishing slavery; Lincoln encouraged border states to adopt abolition and pushed for passage of the 13th Amendment. The Senate passed the 13th Amendment by the necessary two-thirds vote on April 8, 1864; the House of Representatives did so on January 31, 1865; and the required three-fourths of the states ratified it on December 6, 1865. The amendment made slavery and involuntary servitude unconstitutional, "except as a punishment for a crime".
2001-10-19T13:24:37Z
2023-12-31T16:15:42Z
[ "Template:American Civil War", "Template:Reflist", "Template:Cite book", "Template:Cbignore", "Template:Cite magazine", "Template:Reconstruction Era", "Template:Authority control", "Template:Abraham Lincoln series", "Template:Harvnb", "Template:Cite journal", "Template:Webarchive", "Template:Abraham Lincoln", "Template:Use mdy dates", "Template:Page needed", "Template:More citations needed section", "Template:Main", "Template:Open access", "Template:ISBN", "Template:Refbegin", "Template:History of slavery in the United States", "Template:About", "Template:Slavery", "Template:Blockquote", "Template:Juneteenth", "Template:Further", "Template:Wikiquote", "Template:Circa", "Template:Clear", "Template:Dead link", "Template:Commons category", "Template:Short description", "Template:Quote", "Template:Refn", "Template:Librivox book", "Template:Pp-protect", "Template:Wikisource", "Template:Cite web", "Template:Refend", "Template:Cite NIE", "Template:Infobox U.S. Presidential Document", "Template:Cite news", "Template:Cite thesis" ]
https://en.wikipedia.org/wiki/Emancipation_Proclamation
9,516
Erwin Rommel
Johannes Erwin Eugen Rommel (pronounced [ˈɛʁviːn ˈʁɔməl] ; 15 November 1891 – 14 October 1944) was a German Generalfeldmarschall (field marshal) during World War II. Popularly known as the Desert Fox (German: Wüstenfuchs, pronounced [ˈvyːstn̩ˌfʊks] ), he served in the Wehrmacht (armed forces) of Nazi Germany, as well as serving in the Reichswehr of the Weimar Republic, and the army of Imperial Germany. Rommel was injured multiple times in both world wars. Rommel was a highly decorated officer in World War I and was awarded the Pour le Mérite for his actions on the Italian Front. In 1937, he published his classic book on military tactics, Infantry Attacks, drawing on his experiences in that war. In World War II, he commanded the 7th Panzer Division during the 1940 invasion of France. His leadership of German and Italian forces in the North African campaign established his reputation as one of the ablest tank commanders of the war, and earned him the nickname der Wüstenfuchs, "the Desert Fox". Among his British adversaries he had a reputation for chivalry, and his phrase "war without hate" has been uncritically used to describe the North African campaign. A number of historians have since rejected the phrase as a myth and uncovered numerous examples of German war crimes and abuses towards enemy soldiers and native populations in Africa during the conflict. Other historians note that there is no clear evidence Rommel was involved or aware of these crimes, with some pointing out that the war in the desert, as fought by Rommel and his opponents, still came as close to a clean fight as there was in World War II. He later commanded the German forces opposing the Allied cross-channel invasion of Normandy in June 1944. With the Nazis gaining power in Germany, Rommel gradually accepted the new regime. Historians have given different accounts of the specific period and his motivations. He was a supporter of Adolf Hitler, at least until near the end of the war, if not necessarily sympathetic to the party and the paramilitary forces associated with it. In 1944, Rommel was implicated in the 20 July plot to assassinate Hitler. Because of Rommel's status as a national hero, Hitler wanted to eliminate him quietly instead of having him immediately executed, as many other plotters were. Rommel was given a choice between suicide, in return for assurances that his reputation would remain intact and that his family would not be persecuted following his death, or facing a trial that would result in his disgrace and execution; he chose the former and took a cyanide pill. Rommel was given a state funeral, and it was announced that he had succumbed to his injuries from the strafing of his staff car in Normandy. Rommel became a larger-than-life figure in both Allied and Nazi propaganda, and in postwar popular culture. Numerous authors portray him as an apolitical, brilliant commander and a victim of Nazi Germany, although this assessment is contested by other authors as the Rommel myth. Rommel's reputation for conducting a clean war was used in the interest of the West German rearmament and reconciliation between the former enemies – the United Kingdom and the United States on one side and the new Federal Republic of Germany on the other. Several of Rommel's former subordinates, notably his chief of staff Hans Speidel, played key roles in German rearmament and integration into NATO in the postwar era. 
The German Army's largest military base, the Field Marshal Rommel Barracks, Augustdorf, and the third ship of the German Navy's Lütjens-class destroyers are both named in his honour. His son Manfred Rommel was the longtime mayor of Stuttgart, Germany, and the namesake of Stuttgart Airport. Rommel was born on 15 November 1891, in Heidenheim, 45 kilometres (28 mi) from Ulm, in the Kingdom of Württemberg, Southern Germany, then part of the German Empire. He was the third of five children of Erwin Rommel Senior (1860–1913) and his wife Helene von Luz, whose father, Karl von Luz, headed the local government council. As a young man, Rommel's father had been an artillery lieutenant. Rommel had one older sister who was an art teacher and his favourite sibling, one older brother named Manfred who died in infancy, and two younger brothers, of whom one became a successful dentist and the other an opera singer. In 1910, at age 18, Rommel joined the Württemberg Infantry Regiment No. 124 in Weingarten as a Fähnrich (ensign), studying at the Officer Cadet School in Danzig. He graduated in November 1911, was commissioned as a lieutenant in January 1912, and was assigned to the 124th Infantry in Weingarten. He was posted to Ulm in March 1914 to the 49th Field Artillery Regiment, XIII (Royal Württemberg) Corps, as a battery commander. He returned to the 124th when war was declared. While at Cadet School, Rommel met his future wife, 17-year-old Lucia (Lucie) Maria Mollin (1894–1971), of Italian and Polish descent. During World War I, Rommel fought in France as well as in the Romanian (notably at the Second Battle of the Jiu Valley) and Italian campaigns. He successfully employed the tactics of penetrating enemy lines with heavy covering fire coupled with rapid advances, as well as moving forward rapidly to a flanking position to arrive at the rear of hostile positions, to achieve tactical surprise. His first combat experience was on 22 August 1914 as a platoon commander near Verdun, when – catching a French garrison unprepared – Rommel and three men opened fire on them without ordering the rest of his platoon forward. The armies continued to skirmish in open engagements throughout September, as the static trench warfare typical of the First World War was still in the future. For his actions in September 1914 and January 1915, Rommel was awarded the Iron Cross, Second Class. Rommel was promoted to Oberleutnant (first lieutenant) and transferred to the newly created Royal Württemberg Mountain Battalion of the Alpenkorps in September 1915, as a company commander. In November 1916 in Danzig, Rommel and Lucia married. In August 1917, his unit was involved in the battle for Mount Cosna, a heavily fortified objective on the border between Hungary and Romania, which they took after two weeks of difficult uphill fighting. The Mountain Battalion was next assigned to the Isonzo front, in a mountainous area in Italy. The offensive, known as the Battle of Caporetto, began on 24 October 1917. Rommel's battalion, consisting of three rifle companies and a machine gun unit, was part of an attempt to take enemy positions on three mountains: Kolovrat, Matajur, and Stol. In two and a half days, from 25 to 27 October, Rommel and his 150 men captured 81 guns and 9,000 men (including 150 officers), at a loss of six dead and 30 wounded. 
Rommel achieved this remarkable success by taking advantage of the terrain to outflank the Italian forces, attacking from unexpected directions or behind enemy lines, and taking the initiative to attack when he had orders to the contrary. In one instance, the Italian forces, taken by surprise and believing that their lines had collapsed, surrendered after a brief firefight. In this battle, Rommel helped pioneer infiltration tactics, a new form of manoeuvre warfare just being adopted by German armies, and later by foreign armies, and described by some as Blitzkrieg without tanks, though he played no role in the early adoption of Blitzkrieg in World War II. Acting as advance guard in the capture of Longarone on 9 November, Rommel again decided to attack with a much smaller force. Convinced that they were surrounded by an entire German division, the 1st Italian Infantry Division – 10,000 men – surrendered to Rommel. For this and his actions at Matajur, he received the order of Pour le Mérite. In January 1918, Rommel was promoted to Hauptmann (captain) and assigned to a staff position in the 64th Army Corps, where he served for the remainder of the war. Rommel remained with the 124th Regiment until October 1920. The regiment was involved in quelling riots and civil disturbances that were occurring throughout Germany at this time. Wherever possible, Rommel avoided the use of force in these confrontations. In 1919, he was briefly sent to Friedrichshafen on Lake Constance, where he restored order by "sheer force of personality" in the 32nd Internal Security Company, which was composed of rebellious and pro-communist sailors. He decided against storming the nearby city of Lindau, which had been taken by revolutionary communists. Instead, Rommel negotiated with the city council and managed to return it to the legitimate government through diplomatic means. This was followed by his defence of Schwäbisch Gmünd, again bloodless. He was then posted to the Ruhr, where a red army was responsible for fomenting unrest. Historian Raffael Scheck praises Rommel as a coolheaded and moderate mind, exceptional amid the many takeovers of revolutionary cities by regular and irregular units and the associated massive violence. According to Reuth, this period gave Rommel the indelible impression that "Everyone in this Republic was fighting each other," along with the direct experience of people who attempted to convert Germany into a socialist republic on Soviet lines. There are similarities with Hitler's experiences: like Rommel, Hitler had known the solidarity of trench warfare and then had participated in the Reichswehr's suppression of the First and Second Bavarian Soviet Republics. The need for national unity thus became a decisive legacy of the first World War. Brighton notes that while both believed in the Stab-in-the-back myth, Rommel was able to succeed using peaceful methods because he saw the problem in empty stomachs rather than in Judeo-Bolshevism – which right-wing soldiers such as Hitler blamed for the chaos in Germany. On 1 October 1920, Rommel was appointed to a company command with the 13th Infantry Regiment in Stuttgart, a post he held for the next nine years. He was then assigned to an instruction position at the Dresden Infantry School from 1929 to 1933; during this time, in April 1932, he was promoted to major. While at Dresden, he wrote a manual on infantry training, published in 1934. 
In October 1933, he was promoted to Oberstleutnant (lieutenant colonel) and given his next command, the 3rd Jäger Battalion, 17th Infantry Regiment, stationed at Goslar. Here he first met Hitler, who inspected his troops on 30 September 1934. In September 1935, Rommel was moved to the War Academy in Potsdam as an instructor, serving for the next three years. His book Infanterie greift an (Infantry Attacks), a description of his wartime experiences along with his analysis, was published in 1937. It became a best-seller, which, according to Scheck, later "enormously influenced" many armies of the world; Adolf Hitler was one of many who owned a copy. Hearing of Rommel's reputation as an outstanding military instructor, in February 1937 Hitler assigned him as the War Ministry liaison officer to the Hitler Youth in charge of military training. Here he clashed with Baldur von Schirach, the Hitler Youth leader, over the training that the boys should receive. Trying to fulfill a mission assigned to him by the Ministry of War, Rommel had twice proposed a plan that would have effectively subordinated the Hitler Youth to the army, removing it from NSDAP control. That went against Schirach's express wishes. Schirach appealed directly to Hitler; consequently, Rommel was quietly removed from the project in 1938. He had been promoted to Oberst (colonel) on 1 August 1937, and in 1938, following the Anschluss, he was appointed commandant of the Theresian Military Academy at Wiener Neustadt. In October 1938, Hitler specially requested that Rommel be seconded to command the Führerbegleitbatallion (his escort battalion). This unit accompanied Hitler whenever he travelled outside of Germany. During this period, Rommel indulged his interest in engineering and mechanics by learning about the inner workings and maintenance of internal combustion engines and heavy machine guns. He memorised logarithm tables in his spare time and enjoyed skiing and other outdoor sports. Ian F. Beckett writes that by 1938, Rommel drifted towards uncritical acceptance of the Nazi regime, quoting Rommel's letter to his wife in which he stated "The German Wehrmacht is the sword of the new German world view" as a reaction to a speech by Hitler. During his visit to Switzerland in 1938, Rommel reported that Swiss soldiers whom he met showed "remarkable understanding of our Jewish problem". Butler comments that he did share the view (popular in Germany and many European countries during that time) that as a people, the Jews were loyal to themselves rather than the nations in which they lived. Despite this fact, other pieces of evidence show that he considered the Nazi racial ideologies rubbish. Searle comments that Rommel knew the official stance of the regime, but in this case, the phrase was ambiguous and there is no evidence before or after this event that he ever sympathised with the antisemitism of the Nazi movement. Rommel's son Manfred Rommel stated in the documentary The Real Rommel, published in 2001 by Channel 4, that his father would "look the other way" when faced with anti-Jewish violence on the streets. According to the documentary, Rommel also requested proof of "Aryan descent" from the Italian boyfriend of his illegitimate daughter Gertrud. According to Remy, during the time Rommel was posted in Goslar, he repeatedly clashed with the SA, whose members terrorised the Jews and dissident Goslar citizens. 
After the Röhm Purge, he mistakenly believed that the worst was over, although restrictions on Jewish businesses were still being imposed and agitation against their community continued. According to Remy, Manfred Rommel recounts that his father knew about and privately disagreed with the government's antisemitism, but by this time, he had not actively campaigned on behalf of the Jews. However, Uri Avnery notes that even when he was a low-ranking officer, he protected the Jews who lived in his district. Manfred Rommel tells the Stuttgarter Nachrichten that their family lived in isolated military lands but knew about the discrimination against the Jews which was occurring on the outside. They could not foresee the enormity of the impending atrocities, about which they only knew much later. At one point, Rommel wrote to his wife that Hitler had a "magnetic, maybe hypnotic, strength" that had its origin in Hitler's belief that he "was called upon by God" and Hitler sometimes "spoke from the depth of his being [...] like a prophet". Rommel was promoted to Generalmajor on 23 August 1939 and assigned as commander of the Führerbegleitbatallion, tasked with guarding Hitler and his field headquarters during the invasion of Poland, which began on 1 September. According to Remy, Rommel's private letters at this time show that he did not understand Hitler's true nature and intentions, as he quickly went from predicting a swift peaceful settlement of tensions to approving Hitler's reaction ("bombs will be retaliated with bombs") to the Gleiwitz incident (a false flag operation staged by Hitler and used as a pretext for the invasion). Hitler took a personal interest in the campaign, often moving close to the front in the Führersonderzug (headquarters train). Rommel attended Hitler's daily war briefings and accompanied him everywhere, making use of the opportunity to observe first-hand the use of tanks and other motorised units. On 26 September Rommel returned to Berlin to set up a new headquarters for his unit in the Reich Chancellery. Rommel briefly returned to occupied Warsaw on 5 October in order to prepare for the German victory parade. In a letter to his wife he claimed that the occupation by Nazi Germany was "probably welcomed with relief" by the inhabitants of the ruined city and that they were "rescued". Following the invasion of Poland, Rommel began lobbying for command of one of Germany's panzer divisions, of which there were then only ten. Rommel's successes in World War I were based on surprise and manoeuvre, two elements for which the new panzer units were ideally suited. Rommel received a promotion to a general's rank from Hitler ahead of more senior officers. Rommel obtained the command he aspired to, despite having been earlier turned down by the army's personnel office, which had offered him command of a mountain division instead. According to Peter Caddick-Adams, he was backed by Hitler, the influential Fourteenth Army commander Wilhelm List (a fellow Württemberger middle-class "military outsider") and likely Heinz Guderian, the commander of XIX Army Corps, as well. Going against military protocol, this promotion added to Rommel's growing reputation as one of Hitler's favoured commanders, although his later outstanding leadership in France quelled complaints about his self-promotion and political scheming. 
The 7th Panzer Division had recently been converted to an armoured division consisting of 218 tanks in three battalions (thus, one tank regiment, instead of the two assigned to a standard panzer division), with two rifle regiments, a motorcycle battalion, an engineer battalion, and an anti-tank battalion. Upon taking command on 10 February 1940, Rommel quickly set his unit to practising the manoeuvres they would need in the upcoming campaign. The invasion began on 10 May 1940. By the third day Rommel and the advance elements of his division, together with a detachment of the 5th Panzer Division, had reached the Meuse, where they found the bridges had already been destroyed (Guderian and Georg-Hans Reinhardt reached the river on the same day). Rommel was active in the forward areas, directing the efforts to make a crossing, which were initially unsuccessful because of suppressive fire by the French on the other side of the river. Rommel brought up tanks and flak units to provide counter-fire and had nearby houses set on fire to create a smokescreen. He sent infantry across in rubber boats, appropriated the bridging tackle of the 5th Panzer Division, personally grabbed a light machine gun to fight off a French counterattack supported by tanks, and went into the water himself, encouraging the sappers and helping lash together the pontoons. By 16 May Rommel reached Avesnes, and contravening orders, he pressed on to Cateau. That night, the French II Army Corps was shattered and on 17 May, Rommel's forces took 10,000 prisoners, losing 36 men in the process. He was surprised to find out only his vanguard had followed his tempestuous surge. The High Command and Hitler had been extremely nervous about his disappearance, although they awarded him the Knight's Cross. Rommel's (and Guderian's) successes and the new possibilities offered by the new tank arm were welcomed by a small number of generals, but worried and paralysed the rest. On 20 May, Rommel reached Arras. General Hermann Hoth received orders that the town should be bypassed and its British garrison thus isolated. He ordered the 5th Panzer Division to move to the west and the 7th Panzer Division to the east, flanked by the SS Division Totenkopf. The following day, the British launched a counterattack in the Battle of Arras. It failed and the British withdrew. On 24 May, Generaloberst (Colonel General) Gerd von Rundstedt and Generaloberst Günther von Kluge issued a halt order, which Hitler approved. The reason for this decision is still a matter of debate. The halt order was lifted on 26 May. 7th Panzer continued its advance, reaching Lille on 27 May. The Siege of Lille continued until 31 May, when the French garrison of 40,000 men surrendered. Rommel was summoned to Berlin to meet with Hitler. He was the only divisional commander present at the planning session for Fall Rot (Case Red), the second phase of the invasion of France. By this time the Dunkirk evacuation was complete; over 338,000 Allied troops had been evacuated across the Channel, though they had to leave behind all their heavy equipment and vehicles. Rommel, resuming his advance on 5 June, drove for the River Seine to secure the bridges near Rouen. Advancing 100 kilometres (60 mi) in two days, the division reached Rouen to find it defended by three French tanks which managed to destroy a number of German tanks before being taken out. 
The German force, enraged by this resistance, forbade fire brigades access to the burning district of the old Norman capital, and as a result most of the historic quarter was reduced to ashes. According to David Fraser, Rommel instructed the German artillery to bombard the city as a "fire demonstration". According to one witness report the smoke from burning Rouen was intense enough that it reached Paris. Daniel Allen Butler states that the bridges to the city were already destroyed. After the fall of the city, both black civilians and colonial troops were summarily executed on 9 June by unknown German units. The number of black civilians and prisoners killed is estimated at around 100. According to Butler and Showalter, Rouen fell to the 5th Panzer Division, while Rommel advanced from the Seine towards the Channel. On 10 June, Rommel reached the coast near Dieppe, sending Hoth the message "Bin an der Küste" ("Am on the coast"). On 17 June, 7th Panzer was ordered to advance on Cherbourg, where additional British evacuations were under way. The division advanced 240 km (150 mi) in 24 hours, and after two days of shelling, the French garrison surrendered on 19 June. The speed and surprise that it was consistently able to achieve, to the point at which both the enemy and the Oberkommando des Heeres (OKH; German "High Command of the Army") at times lost track of its whereabouts, earned the 7th Panzers the nickname Gespensterdivision ("ghost division"). After the armistice with the French was signed on 22 June, the division was placed in reserve, being sent first to the Somme and then to Bordeaux to re-equip and prepare for Unternehmen Seelöwe (Operation Sea Lion), the planned invasion of Britain. This invasion was later cancelled, as Germany was not able to acquire the air superiority needed for a successful outcome, while the Kriegsmarine was massively outnumbered by the Royal Navy. On 6 February 1941, Rommel was appointed commander of the new Afrika Korps (Deutsches Afrika Korps; DAK), consisting of the 5th Light Division (later renamed 21st Panzer Division) and of the 15th Panzer Division. He was promoted to Generalleutnant three days later and flew to Tripoli on 12 February. The DAK had been sent to Libya in Operation Sonnenblume to support Italian troops who had been roundly defeated by British Commonwealth forces in Operation Compass. His efforts in the Western Desert Campaign earned Rommel the nickname the "Desert Fox" from journalists on both sides of the war. Allied troops in Africa were commanded by General Archibald Wavell, Commander-in-Chief, Middle East Command. Rommel and his troops were technically subordinate to Italian commander-in-chief General Italo Gariboldi. Disagreeing with the orders of the Oberkommando der Wehrmacht (OKW, German armed forces high command) to assume a defensive posture along the front line at Sirte, Rommel resorted to subterfuge and insubordination to take the war to the British. According to Remy, the General Staff tried to slow him down but Hitler encouraged him to advance—an expression of the conflict that had existed between Hitler and the army leadership since the invasion of Poland. He decided to launch a limited offensive on 24 March with the 5th Light Division, supported by two Italian divisions. This thrust was not anticipated by the British, who had Ultra intelligence showing that Rommel had orders to remain on the defensive until at least May, when the 15th Panzer Division were due to arrive. 
The British Western Desert Force had meanwhile been weakened by the transfer in mid-February of three divisions for the Battle of Greece. They fell back to Mersa El Brega and started constructing defensive works. After a day of fierce fighting on 31 March, the Germans captured Mersa El Brega. Splitting his force into three groups, Rommel resumed the advance on 3 April. Benghazi fell that night as the British pulled out of the city. Gariboldi, who had ordered Rommel to stay in Mersa El Brega, was furious. Rommel was equally forceful in his response, telling Gariboldi, "One cannot permit unique opportunities to slip by for the sake of trifles." A signal arrived from General Franz Halder reminding Rommel that he was to halt in Mersa El Brega. Knowing Gariboldi could not speak German, Rommel told him the message gave him complete freedom of action. Gariboldi backed down. Throughout the campaign, fuel supply was problematic, as no petrol was available locally; it had to be brought from Europe by tanker and then carried by road to where it was needed. Food and fresh water were also in short supply, and it was difficult to move tanks and other equipment off-road through the sand. Cyrenaica was captured by 8 April, except for the port city of Tobruk, which was besieged on 11 April. The siege of Tobruk was not technically a siege, as the defenders were still able to move supplies and reinforcements into the city via the port. Rommel knew that by capturing the port he could greatly reduce the length of his supply lines and increase his overall port capacity, which was insufficient even for day-to-day operations and only half that needed for offensive operations. The city, which had been heavily fortified by the Italians during their 30-year occupation, was garrisoned by 36,000 Commonwealth troops, commanded by Australian Lieutenant General Leslie Morshead. Hoping to catch the defenders off-guard, Rommel launched a failed attack on 14 April. Rommel requested reinforcements, but the OKW, then completing preparations for Operation Barbarossa, refused. General Friedrich Paulus, head of the Operations Branch of the OKH, arrived on 25 April to review the situation. He was present for a second failed attack on the city on 30 April. On 4 May, Paulus ordered that no further attempts should be made to take Tobruk via a direct assault. Following a failed counter-attack in Operation Brevity in May, Wavell launched Operation Battleaxe on 15 June; this attack was also defeated. The defeat resulted in Churchill replacing Wavell with General Claude Auchinleck as theatre commander. In August, Rommel was appointed commander of the newly created Panzer Army Africa, with Fritz Bayerlein as his chief of staff. The Afrika Korps, comprising the 15th Panzer Division and the 5th Light Division, now reinforced and redesignated 21st Panzer Division, was put under command of Generalleutnant Ludwig Crüwell. In addition to the Afrika Korps, Rommel's Panzer Group had the 90th Light Division and four Italian divisions, three infantry divisions investing Tobruk, and one holding Bardia. The two Italian armoured divisions, formed into the Italian XX Motorized Corps under the command of General Gastone Gambara, were under Italian control. Two months later Hitler decided he must have German officers in better control of the Mediterranean theatre, and appointed Field Marshal Albert Kesselring as Commander in Chief, South. Kesselring was ordered to get control of the air and sea between Africa and Italy. 
Following his success in Battleaxe, Rommel returned his attention to the capture of Tobruk. He made preparations for a new offensive, to be launched between 15 and 20 November. Meanwhile, Auchinleck reorganised Allied forces and strengthened them to two corps, XXX and XIII, which formed the British Eighth Army. It was placed under the command of Alan Cunningham. Auchinleck launched Operation Crusader, a major offensive to relieve Tobruk, on 18 November 1941. Rommel reluctantly decided on 20 November to call off his planned attack on Tobruk. In four days of heavy fighting, the Eighth Army lost 530 tanks and Rommel only 100. Wanting to exploit the British halt and their apparent disorganisation, on 24 November Rommel counterattacked near the Egyptian border in an operation that became known as the "dash to the wire". Cunningham asked Auchinleck for permission to withdraw into Egypt, but Auchinleck refused, and soon replaced Cunningham as commander of Eighth Army with Major General Neil Ritchie. The German counterattack stalled as it outran its supplies and met stiffening resistance, and was criticised by the German High Command and some of Rommel's staff officers. While Rommel drove into Egypt, the remaining Commonwealth forces east of Tobruk threatened the weak Axis lines there. Unable to reach Rommel for several days, Rommel's Chief of Staff, Siegfried Westphal, ordered the 21st Panzer Division withdrawn to support the siege of Tobruk. On 27 November, the British attack on Tobruk linked up with the defenders, and Rommel, having suffered losses that could not easily be replaced, had to concentrate on regrouping the divisions that had attacked into Egypt. By 7 December, Rommel fell back to a defensive line at Gazala, just west of Tobruk, all the while under heavy attack from the Desert Air Force. The Allies kept up the pressure, and Rommel was forced to retreat all the way back to the starting positions he had held in March, reaching El Agheila in December 1941. The British had retaken almost all of Cyrenaica, but Rommel's retreat dramatically shortened his supply lines. On 5 January 1942, the Afrika Korps received 55 tanks and new supplies and Rommel started planning a counterattack, which he launched on 21 January. Caught by surprise, the Allies lost over 110 tanks and other heavy equipment. The Axis forces retook Benghazi on 29 January and Timimi on 3 February, with the Allies pulling back to a defensive line just before the Tobruk area south of the coastal town of Gazala. Between December 1941 and June 1942, Rommel had excellent information about the disposition and intentions of the Commonwealth forces. Bonner Fellers, US military attaché in Egypt, was sending detailed reports to the US State Department using a compromised code. Following Kesselring's successes in creating local air superiority around the British naval and air bases at Malta in April 1942, an increased flow of supplies reached the Axis forces in Africa. With his forces strengthened, Rommel contemplated a major offensive operation for the end of May. He knew the British were planning offensive operations as well, and he hoped to pre-empt them. Early in the afternoon of 26 May 1942, Rommel attacked first and the Battle of Gazala commenced. Under the cover of darkness, the bulk of Rommel's motorised and armoured forces drove south to skirt the left flank of the British, coming up behind them and attacking to the north the following morning. 
On 30 May, Rommel resumed the offensive, and on 1 June, Rommel accepted the surrender of some 3,000 Commonwealth soldiers. On 6 June, Rommel's forces assaulted the Free French strongpoint in the Battle of Bir Hakeim, but the defenders continued to thwart the attack until finally evacuating on 10 June. Rommel then shifted his attack north; threatened with being completely cut off, the British began a retreat eastward toward Egypt on 14 June, the so-called "Gazala Gallop". The assault on Tobruk proper began at dawn on 20 June, and the British surrendered at dawn the following day. Rommel's forces captured 32,000 Commonwealth troops, the port, and huge quantities of supplies. Only at the fall of Singapore, earlier that year, had more British Commonwealth troops been captured at one time. On 22 June, Hitler promoted Rommel to Generalfeldmarschall for this victory. Following his success at Gazala and Tobruk, Rommel wanted to seize the moment and not allow the 8th Army a chance to regroup. He strongly argued that the Panzerarmee should advance into Egypt and drive on to Alexandria and the Suez Canal, as this would place almost all the Mediterranean coastline in Axis hands and, according to Rommel, potentially lead to the capture from the south of the oil fields in the Caucasus and Middle East. Rommel's success at Tobruk worked against him, as Hitler no longer felt it was necessary to proceed with Operation Herkules, the proposed attack on Malta. Auchinleck relieved Ritchie of command of the Eighth Army on 25 June, and temporarily took command himself. Rommel knew that delay would only benefit the British, who continued to receive supplies at a faster rate than Rommel could hope to achieve. He pressed an attack on the heavily fortified town of Mersa Matruh, which Auchinleck had designated as the fall-back position, surrounding it on 28 June. The fortress fell to the Germans on 29 June. In addition to stockpiles of fuel and other supplies, the British abandoned hundreds of tanks and trucks. Those that were functional were put into service by the Panzerwaffe. Rommel continued his pursuit of the Eighth Army, which had fallen back to heavily prepared defensive positions at El Alamein. This region is a natural choke point, where the Qattara Depression creates a relatively short line to defend that could not be outflanked to the south because of the steep escarpment. During this time the Germans prepared numerous propaganda postcards and leaflets for the Egyptian and Syrian populations, urging them to "chase English out of the cities" and warning them about the "Jewish peril"; one leaflet, printed in 296,000 copies and aimed at Syria, declared among other things: "Because Marshal Rommel, at the head of the brave Axis troops, is already rattling the last gates of England's power! Arabs! Help your friends achieve their goal: abolishing the English-Jewish-American tyranny!" On 1 July, the First Battle of El Alamein began. Rommel had around 100 available tanks. The Allies were able to achieve local air superiority, with heavy bombers attacking the 15th and 21st Panzers, who had also been delayed by a sandstorm. The 90th Light Division veered off course and were pinned down by South African artillery fire. Rommel continued to attempt to advance for two more days, but repeated sorties by the Desert Air Force meant he could make no progress. On 3 July, he wrote in his diary that his strength had "faded away". Attacks by 21st Panzer on 13 and 14 July were repulsed, and an Australian attack on 16–17 July was held off with difficulty. 
Throughout the first half of July, Auchinleck concentrated attacks on the Italian 60th Infantry Division Sabratha at Tel el Eisa. The ridge was captured by the 26th Australian Brigade on 16 July. Both sides suffered similar losses throughout the month, but the Axis supply situation remained less favourable. Rommel realised that the tide was turning. A break in the action took place at the end of July as both sides rested and regrouped. Preparing for a renewed drive, the British replaced Auchinleck with General Harold Alexander on 8 August. Bernard Montgomery was made the new commander of Eighth Army that same day. The Eighth Army had initially been assigned to General William Gott, but he was killed when his plane was shot down on 7 August. Rommel knew that a British convoy carrying over 100,000 tons of supplies was due to arrive in September. He decided to launch an attack at the end of August with the 15th and 21st Panzer Division, 90th Light Division, and the Italian XX Motorized Corps in a drive through the southern flank of the El Alamein lines. Expecting an attack sooner rather than later, Montgomery fortified the Alam el Halfa ridge with the 44th Division, and positioned the 7th Armoured Division about 25 kilometres (15 mi) to the south. The Battle of Alam el Halfa was launched on 30 August. The terrain left Rommel with no choice but to follow a similar tactic as he had at previous battles: the bulk of the forces attempted to sweep around from the south while secondary attacks were launched on the remainder of the front. It took much longer than anticipated to get through the minefields in the southern sector, and the tanks got bogged down in unexpected patches of quicksand (Montgomery had arranged for Rommel to acquire a falsified map of the terrain). Under heavy fire from British artillery and aircraft, and in the face of well prepared positions that Rommel could not hope to outflank for lack of fuel, the attack stalled. By 2 September, Rommel realised the battle was unwinnable, and decided to withdraw. On the night of 3 September, the 2nd New Zealand Division and 7th Armoured Division positioned to the north engaged in an assault, but they were repelled in a fierce rearguard action by the 90th Light Division. Montgomery called off further action to preserve his strength and allow for further desert training for his forces. In the attack, Rommel had suffered 2,940 casualties and lost 50 tanks, a similar number of guns, and 400 lorries, vital for supplies and movement. The British losses, except tank losses of 68, were much less, further adding to the numerical inferiority of Panzer Army Africa. The Desert Air Force inflicted the highest proportions of damage on Rommel's forces. He now realised the war in Africa could not be won. Physically exhausted and suffering from a liver infection and low blood pressure, Rommel flew home to Germany to recover his health. General Georg Stumme was left in command in Rommel's absence. Improved decoding by British intelligence (see Ultra) meant that the Allies had advance knowledge of virtually every Mediterranean convoy, and only 30 per cent of shipments were getting through. In addition, Mussolini diverted supplies intended for the front to his garrison at Tripoli and refused to release any additional troops to Rommel. The increasing Allied air superiority and lack of fuel meant Rommel was forced to take a more defensive posture than he would have liked for the second Battle of El Alamein. 
The German defences to the west of the town included a minefield eight kilometres (five miles) deep with the main defensive line – itself several thousand yards deep – to its west. This, Rommel hoped, would allow his infantry to hold the line at any point until motorised and armoured units in reserve could move up and counterattack any Allied breaches. The British offensive began on 23 October. Stumme, in command in Rommel's absence, died of an apparent heart attack while examining the front on 24 October, and Rommel was ordered to return from his medical leave, arriving on the 25th. Montgomery's intention was to clear a narrow path through the minefield at the northern part of the defences, at the area called Kidney Ridge, with a feint to the south. By the end of 25 October, the 15th Panzer, the defenders in this sector, had only 31 serviceable tanks remaining of their initial force of 119. Rommel brought the 21st Panzer and Ariete Divisions north on 26 October, to bolster the sector. On 28 October, Montgomery shifted his focus to the coast, ordering his 1st and 10th Armoured Divisions to attempt to swing around and cut off Rommel's line of retreat. Meanwhile, Rommel concentrated his attack on the Allied salient at Kidney Ridge, inflicting heavy losses. However, Rommel had only 150 operational tanks remaining, and Montgomery had 800, many of them Shermans. Montgomery, seeing his armoured brigades losing tanks at an alarming rate, stopped major attacks until the early hours of 2 November, when he opened Operation Supercharge, with a massive artillery barrage. Due to heavy losses in tanks, towards the end of the day, Rommel ordered his forces to disengage and begin to withdraw. At midnight, he informed the OKW of his decision, and received a reply directly from Hitler the following afternoon: he ordered Rommel and his troops to hold their position to the last man. Rommel, who believed that the lives of his soldiers should never be squandered needlessly, was stunned. Rommel initially complied with the order, but after discussions with Kesselring and others, he issued orders for a retreat on 4 November. The delay proved costly in terms of his ability to get his forces out of Egypt. He later said the decision to delay was what he most regretted from his time in Africa. Meanwhile, the British 1st and 7th Armoured Division had broken through the German defences and were preparing to swing north and surround the Axis forces. On the evening of the 4th, Rommel finally received word from Hitler authorising the withdrawal. As Rommel attempted to withdraw his forces before the British could cut off his retreat, he fought a series of delaying actions. Heavy rains slowed movements and grounded the Desert Air Force, which aided the withdrawal, yet Rommel's troops were under pressure from the pursuing Eighth Army and had to abandon the trucks of the Italian forces, leaving them behind. Rommel continued to retreat west, aiming for 'Gabes gap' in Tunisia. Kesselring strongly criticised Rommel's decision to retreat all the way to Tunisia, as each airfield the Germans abandoned extended the range of the Allied bombers and fighters. Rommel defended his decision, pointing out that if he tried to assume a defensive position the Allies would destroy his forces and take the airfields anyway; the retreat saved the lives of his remaining men and shortened his supply lines. 
By now, Rommel's remaining forces fought in reduced strength combat groups, whereas the Allied forces had great numerical superiority and control of the air. On his arrival in Tunisia, Rommel noted with some bitterness the reinforcements, including the 10th Panzer Division, arriving in Tunisia following the Allied invasion of Morocco. Having reached Tunisia, Rommel launched an attack against the U.S. II Corps which was threatening to cut his lines of supply north to Tunis. Rommel inflicted a sharp defeat on the American forces at the Kasserine Pass in February, his last battlefield victory of the war, and his first engagement against the United States Army. Rommel immediately turned back against the British forces, occupying the Mareth Line (old French defences on the Libyan border). While Rommel was at Kasserine at the end of January 1943, the Italian General Giovanni Messe was appointed commander of Panzer Army Africa, renamed the Italo-German Panzer Army in recognition of the fact that it consisted of one German and three Italian corps. Though Messe replaced Rommel, he diplomatically deferred to him, and the two coexisted in what was theoretically the same command. On 23 February Army Group Afrika was created with Rommel in command. It included the Italo-German Panzer Army under Messe (renamed 1st Italian Army) and the German 5th Panzer Army in the north of Tunisia under General Hans-Jürgen von Arnim. The last Rommel offensive in North Africa was on 6 March 1943, when he attacked Eighth Army at the Battle of Medenine. The attack was made with 10th, 15th, and 21st Panzer Divisions. Alerted by Ultra intercepts, Montgomery deployed large numbers of anti-tank guns in the path of the offensive. After losing 52 tanks, Rommel called off the assault. On 9 March he returned to Germany. Command was handed over to General Hans-Jürgen von Arnim. Rommel never returned to Africa. The fighting there continued on for another two months, until 13 May 1943, when Messe surrendered the army group to the Allies. On 23 July 1943, Rommel was moved to Greece as commander of Army Group E to counter a possible British invasion. He arrived in Greece on 25 July but was recalled to Berlin the same day following Mussolini's dismissal from office. This caused the German High Command to review the defensive integrity of the Mediterranean and it was decided that Rommel should be posted to Italy as commander of the newly formed Army Group B. On 16 August 1943, Rommel's headquarters moved to Lake Garda in northern Italy and he formally assumed command of the group, consisting of the 44th Infantry Division, the 26th Panzer Division and the 1st SS Panzer Division Leibstandarte SS Adolf Hitler. When Italy announced its armistice with the Allies on 8 September, Rommel's group took part in Operation Achse, disarming the Italian forces. Hitler met with Rommel and Kesselring to discuss future operations in Italy on 30 September 1943. Rommel insisted on a defensive line north of Rome, while Kesselring was more optimistic and advocated holding a line south of Rome. Hitler preferred Kesselring's recommendation, and therefore revoked his previous decision for the subordination of Kesselring's forces to Rommel's army group. On 19 October, Hitler decided that Kesselring would be the overall commander of the forces in Italy, sidelining Rommel. Rommel had wrongly predicted that the collapse of the German line in Italy would be fast. 
On 21 November, Hitler gave Kesselring overall command of the Italian theatre, moving Rommel and Army Group B to Normandy in France with responsibility for defending the French coast against the long anticipated Allied invasion. On 4 November 1943, Rommel became General Inspector of the Western Defences. He was given a staff that befitted an army group commander, and the powers to travel, examine and make suggestions on how to improve the defences. Hitler, who was having a disagreement with him over military matters, intended to use Rommel as a psychological trump card. There was broad disagreement in the German High Command as to how best to meet the expected allied invasion of Northern France. The Commander-in-Chief West, Gerd von Rundstedt, believed there was no way to stop the invasion near the beaches because of the Allied navies' firepower, as had been experienced at Salerno. He argued that the German armour should be held in reserve well inland near Paris, where they could be used to counter-attack in force in a more traditional military doctrine. The allies could be allowed to extend themselves deep into France, where a battle for control would be fought, allowing the Germans to envelop the allied forces in a pincer movement, cutting off their avenue of retreat. He feared the piecemeal commitment of their armoured forces would cause them to become caught in a battle of attrition which they could not hope to win. The notion of holding the armour inland to use as a mobile reserve force from which they could mount a powerful counterattack applied the classic use of armoured formations as seen in France in 1940. These tactics were still effective on the Eastern Front, where control of the air was important but did not dominate the action. Rommel's own experiences at the end of the North African campaign revealed to him that the Germans would not be able to preserve their armour from air attack for this type of massed assault. Rommel believed their only opportunity would be to oppose the landings directly at the beaches, and to counterattack there before the invaders could become well established. Though there had been some defensive positions established and gun emplacements made, the Atlantic Wall was a token defensive line. Rundstedt had confided to Rommel that it was for propaganda purposes only. Upon arriving in Northern France Rommel was dismayed by the lack of completed works. According to Ruge, Rommel was in a staff position and could not issue orders, but he took every effort to explain his plan to commanders down to the platoon level, who took up his words eagerly, but "more or less open" opposition from the above slowed down the process. Rundstedt intervened and supported Rommel's request for being made a commander. It was granted on 15 January 1944. He and his staff set out to improve the fortifications along the Atlantic Wall with great energy and engineering skill. This was a compromise: Rommel now commanded the 7th and 15th armies; he also had authority over a 20-kilometer-wide strip of coastal land between Zuiderzee and the mouth of the Loire. The chain of command was convoluted: the air force and navy had their own chiefs, as did the South and Southwest France and the Panzer group; Rommel also needed Hitler's permissions to use the tank divisions. 
Rommel had millions of mines laid and thousands of tank traps and obstacles set up on the beaches and throughout the countryside, including in fields suitable for glider aircraft landings, the so-called Rommel's asparagus (the Allies would later counter these with Hobart's Funnies). In April 1944, Rommel promised Hitler that the preparations would be complete by 1 May, a promise he failed to deliver. By the time of the Allied invasion, the preparations were far from finished. The quality of some of the troops manning them was poor and many bunkers lacked sufficient stocks of ammunition. Rundstedt expected the Allies to invade in the Pas-de-Calais because it was the shortest crossing point from Britain, its port facilities were essential to supplying a large invasion force, and the distance from Calais to Germany was relatively short. Rommel and Hitler's views on the matter is a matter of debate between authors, with both seeming to change their positions. Hitler vacillated between the two strategies. In late April, he ordered the I SS Panzer Corps placed near Paris, far enough inland to be useless to Rommel, but not far enough for Rundstedt. Rommel moved those armoured formations under his command as far forward as possible, ordering General Erich Marcks, commanding the 84th Corps defending the Normandy section, to move his reserves into the frontline. Rundstedt was willing to delegate a majority of the responsibilities to Rommel (the central reserve was Rundstedt's idea but he did not oppose some form of coastal defence), Rommel's strategy of an armour-supported coastal defence line was opposed by some officers, most notably Leo Geyr von Schweppenburg, who was supported by Guderian. Hitler compromised and gave Rommel three divisions (the 2nd, the 21st and the 116th Panzer), let Rundstedt retain four and turned the other three to Army Group G, pleasing no one. The Allies staged elaborate deceptions for D-Day (see Operation Fortitude), giving the impression that the landings would be at Calais. Although Hitler himself expected a Normandy invasion for a while, Rommel and most Army commanders in France believed there would be two invasions, with the main invasion coming at the Pas-de-Calais. Rommel drove defensive preparations all along the coast of Northern France, particularly concentrating fortification building in the River Somme estuary. By D-Day on 6 June 1944 nearly all the German staff officers, including Hitler's staff, believed that Pas-de-Calais was going to be the main invasion site, and continued to believe so even after the landings in Normandy had occurred. The 5 June storm in the channel seemed to make a landing very unlikely, and a number of the senior officers left their units for training exercises and various other efforts. On 4 June the chief meteorologist of the 3 Air Fleet reported that weather in the channel was so poor there could be no landing attempted for two weeks. On 5 June, Rommel left France and on 6 June, he was at home celebrating his wife's 50th birthday. He was recalled and returned to his headquarters at 10 pm. Meanwhile, earlier in the day, Rundstedt had requested the reserves be transferred to his command. At 10 am Keitel advised that Hitler declined to release the reserves but that Rundstedt could move the 12th SS Panzer Division Hitlerjugend closer to the coast, with the Panzer-Lehr-Division placed on standby. 
Later in the day, Rundstedt received authorisation to move additional units in preparation for a counterattack, which Rundstedt decided to launch on 7 June. Upon arrival, Rommel concurred with the plan. By nightfall, Rundstedt, Rommel and Speidel continued to believe that the Normandy landing might have been a diversionary attack, as the Allied deception measures still pointed towards Calais. The 7 June counterattack did not take place because Allied air bombardments prevented the 12th SS's timely arrival. All this made the German command structure in France in disarray during the opening hours of the D-Day invasion. The Allies secured five beachheads by nightfall of 6 June, landing 155,000 troops. The Allies pushed ashore and expanded their beachhead despite strong German resistance. Rommel believed that if his armies pulled out of range of Allied naval fire, it would give them a chance to regroup and re-engage them later with a better chance of success. While he managed to convince Rundstedt, they still needed to win over Hitler. At a meeting with Hitler at his Wolfsschlucht II headquarters in Margival in northern France on 17 June, Rommel warned Hitler about the inevitable collapse in the German defences, but was rebuffed and told to focus on military operations. By mid-July the German position was crumbling. On 17 July 1944, as Rommel was returning from visiting the headquarters of the I SS Panzer Corps, a fighter plane piloted by either Charley Fox of 412 Squadron RCAF, Jacques Remlinger of No. 602 Squadron RAF, or Johannes Jacobus le Roux of No. 602 Squadron RAF strafed his staff car near Sainte-Foy-de-Montgommery. The driver sped up and attempted to get off the main roadway, but a 20 mm round shattered his left arm, causing the vehicle to veer off the road and crash into trees. Rommel was thrown from the car, suffering injuries to the left side of his face from glass shards and three fractures to his skull. He was hospitalised with major head injuries (assumed to be almost certainly fatal). The role that Rommel played in the military's resistance against Hitler or the 20 July plot is difficult to ascertain, as most of the leaders who were directly involved did not survive and limited documentation on the conspirators' plans and preparations exists. One piece of evidence that points to the possibility that Rommel came to support the assassination plan was General Eberbach's confession to his son (eavesdropped on by British agencies) while in British captivity which stated that Rommel explicitly said to him that Hitler and his close associates had to be killed because this would be the only way out for Germany. This conversation occurred about a month before Rommel was coerced into suicide. Other notable evidence includes the papers of Rudolf Hartmann (who survived the later purge) and Carl-Heinrich von Stülpnagel, who were among the leaders of the military resistance (alongside Rommel's chief of staff General Hans Speidel, Colonel Karl-Richard Koßmann, Colonel Eberhard Finckh and Lieutenant Colonel Caesar von Hofacker). These papers, accidentally discovered by historian Christian Schweizer in 2018 while doing research on Rudolf Hartmann, include Hartmann's eyewitness account of a conversation between Rommel and Stülpnagel in May 1944, as well as photos of the mid-May 1944 meeting between the inner circle of the resistance and Rommel at Koßmann's house. 
According to Hartmann, by the end of May, in another meeting at Hartmann's quarters in Mareil–Marly, Rommel showed "decisive determination" and clear approval of the inner circle's plan. In a post-war account by Karl Strölin, three of Rommel's friends—the Oberbürgermeister of Stuttgart, Strölin (who had served with Rommel in the First World War), Alexander von Falkenhausen and Stülpnagel—began efforts to bring Rommel into the anti-Hitler conspiracy in early 1944. According to Strölin, sometime in February, Rommel agreed to lend his support to the resistance. On 15 April 1944, Rommel's new chief of staff, Hans Speidel, arrived in Normandy and reintroduced Rommel to Stülpnagel. Speidel had previously been connected to Carl Goerdeler, the civilian leader of the resistance, but not to the plotters led by Claus von Stauffenberg, and came to Stauffenberg's attention only upon his appointment to Rommel's headquarters. The conspirators felt they needed the support of a field marshal on active duty. Erwin von Witzleben, who would have become commander-in-chief of the Wehrmacht had the plot succeeded, was a field marshal, but had been inactive since 1942. The conspirators gave instructions to Speidel to bring Rommel into their circle. Speidel met with former foreign minister Konstantin von Neurath and Strölin on 27 May in Germany, ostensibly at Rommel's request, although the latter was not present. Neurath and Strölin suggested opening immediate surrender negotiations in the West, and, according to Speidel, Rommel agreed to further discussions and preparations. Around the same timeframe, the plotters in Berlin were not aware that Rommel had allegedly decided to take part in the conspiracy. On 16 May, they informed Allen Dulles, through whom they hoped to negotiate with the Western Allies, that Rommel could not be counted on for support. At least initially, Rommel opposed assassinating Hitler. According to some authors, he gradually changed his attitude. After the war, his widow—among others—maintained that Rommel believed an assassination attempt would spark civil war in Germany and Austria, and Hitler would have become a martyr for a lasting cause. Instead, Rommel reportedly suggested that Hitler be arrested and brought to trial for his crimes; he did not attempt to implement this plan when Hitler visited Margival, France, on 17 June. The arrest plan would have been highly improbable as Hitler's security was extremely tight. Rommel would have known this, having commanded Hitler's army protection detail in 1939. He was in favour of peace negotiations and repeatedly urged Hitler to negotiate with the Allies which is dubbed by some as "hopelessly naive" considering no one would trust Hitler. "As naive as it was idealistic, the attitude he showed to the man he had sworn loyalty". According to Reuth, the reason Lucie Rommel did not want her husband to be associated with any conspiracy was that even after the war, the German population neither grasped nor wanted to comprehend the reality of the genocide, thus conspirators were still treated as traitors and outcasts. On the other hand, the resistance depended on the reputation of Rommel to win over the population. Some officers who had worked with Rommel also recognised the relationship between Rommel and the resistance: Westphal said that Rommel did not want any more senseless sacrifices. 
Butler, using Ruge's recollections, reports that when told by Hitler himself that "no one will make peace with me", Rommel told Hitler that if he was the obstacle for peace, he should resign or kill himself, but Hitler insisted on fanatical defence. Reuth, based on Jodl's testimony, reports that Rommel forcefully presented the situation and asked for political solutions from Hitler, who rebuffed that Rommel should leave politics to him. Brighton comments that Rommel seemed devoted, even though he did not have much faith in Hitler anymore considering he kept informing Hitler in person and by letter about his changing beliefs despite facing a military dilemma as well as a personal struggle. Lieb remarks that Rommel's attitude in describing the situation honestly and requiring political solutions was almost without precedent and contrary to the attitude of many other generals. Remy comments that Rommel put himself and his family (which he had briefly considered evacuating to France, but refrained from doing so) at risk for the resistance out of a combination of his concern for the fate of Germany, his indignation at atrocities and the influence of people around him. On 15 July, Rommel wrote a letter to Hitler giving him a "last chance" to end the hostilities with the Western Allies, urging Hitler to "draw the proper conclusions without delay". What Rommel did not know was that the letter took two weeks to reach Hitler because of Kluge's precautions. Various authors report that many German generals in Normandy, including some SS officers like Hausser, Bittrich, Dietrich (a hard-core Nazi and Hitler's long-time supporter) and Rommel's former opponent Geyr von Schweppenburg, pledged support to him even against Hitler's orders, while Kluge supported him with much hesitation. Rundstedt encouraged Rommel to carry out his plans but refused to do anything himself, remarking that it had to be a man who was still young and loved by the people, while Erich von Manstein was also approached by Rommel but categorically refused, although he did not report them to Hitler either. Peter Hoffmann reports that he also attracted into his orbit officials who had previously refused to support the conspiracy, like Julius Dorpmüller and Karl Kaufmann (according to Russell A. Hart, reliable details of the conversations are now lost, although they certainly met). On 17 July 1944, Rommel was incapacitated by an Allied air attack, which many authors describe as a fateful event that drastically altered the outcome of the bomb plot. Writer Ernst Jünger commented: "The blow that felled Rommel ... robbed the plan of the shoulders that were to be entrusted the double weight of war and civil war - the only man who had enough naivety to counter the simple terror that those he was about to go against possessed." After the failed bomb attack of 20 July, many conspirators were arrested and the dragnet expanded to thousands. Rommel was first implicated when Stülpnagel, after his suicide attempt, repeatedly muttered "Rommel" in delirium. Under torture, Hofacker named Rommel as one of the participants. Additionally, Goerdeler had written down Rommel's name on a list as potential Reich President (according to Stroelin. They had not managed to announce this intention to Rommel yet and he probably never heard of it until the end of his life). 
On 27 September, Martin Bormann submitted to Hitler a memorandum which claimed that "the late General Stülpnagel, Colonel Hofacker, Kluge's nephew who has been executed, Lieutenant Colonel Rathgens, and several ... living defendants have testified that Field Marshal Rommel was perfectly in the picture about the assassination plan and has promised to be at the disposal of the New Government." Gestapo agents were sent to Rommel's house in Ulm and placed him under surveillance. Historian Peter Lieb considers the memorandum, as well as Eberbach's conversation and the testimonies of surviving resistance members (including Hartmann), to be the three key sources that indicate Rommel's support of the assassination plan. He further notes that while Speidel had an interest in promoting his own post-war career, his testimonies should not be dismissed, considering his bravery as an early resistance figure. Remy writes that even more important than Rommel's attitude to the assassination is the fact that Rommel had his own plan to end the war. He began to contemplate this plan some months after El Alamein, pursued it with lonely determination and conviction, and in the end managed to bring military leaders in the West to his side. Rommel's case was turned over to the "Court of Military Honour"—a drumhead court-martial convened to decide the fate of officers involved in the conspiracy. The court included Generalfeldmarschall Wilhelm Keitel, Generalfeldmarschall Gerd von Rundstedt, Generaloberst Heinz Guderian, General der Infanterie Walther Schroth and Generalleutnant Karl-Wilhelm Specht, with General der Infanterie Karl Kriebel and Generalleutnant Heinrich Kirchheim (whom Rommel had fired after Tobruk in 1941) as deputy members and Generalmajor Ernst Maisel as protocol officer. The court acquired information from Speidel, Hofacker and others that implicated Rommel, with Keitel and Ernst Kaltenbrunner assuming that he had taken part in the subversion. Keitel and Guderian then made the decision that favoured Speidel's case and at the same time shifted the blame to Rommel. By normal procedure, this would have led to Rommel being brought before Roland Freisler's People's Court, a kangaroo court that always decided in favour of the prosecution. However, Hitler knew that having Rommel branded and executed as a traitor would severely damage morale on the home front. He thus decided to offer Rommel the chance to take his own life. Two generals from Hitler's headquarters, Wilhelm Burgdorf and Ernst Maisel, visited Rommel at his home on 14 October 1944. Burgdorf informed him of the charges against him and offered him three options: he could defend himself personally in front of Hitler in Berlin (refusing to do so would be taken as an admission of guilt); he could face the People's Court, which would have been tantamount to a death sentence; or he could choose death by suicide. Had he faced the People's Court, his family would have suffered even before the all-but-certain conviction and execution, and his staff would have been arrested and executed as well. If he chose suicide, the government would claim that he had died a hero and bury him with full military honours, and his family would receive full pension payments. In support of the suicide option, Burgdorf had brought a cyanide capsule. Rommel chose suicide, and explained his decision to his wife and son. 
Wearing his Afrika Korps jacket and carrying his field marshal's baton, he got into Burgdorf's car, driven by SS-Stabsscharführer Heinrich Doose, and was driven out of the village. After stopping, Doose and Maisel walked away from the car leaving Rommel with Burgdorf. Five minutes later Burgdorf gestured to the two men to return to the car, and Doose noticed that Rommel was slumped over, having taken the cyanide. He died before being taken to the Wagner-Schule field hospital. Ten minutes later, the group telephoned Rommel's wife to inform her of his death. The official notice of Rommel's death as reported to the public stated that he had died of either a heart attack or a cerebral embolism—a complication of the skull fractures he had suffered in the earlier strafing of his staff car. To strengthen the story, Hitler ordered an official day of mourning in commemoration of his death. As promised, Rommel was given a state funeral but it was held in Ulm instead of Berlin as had been requested by Rommel. Hitler sent Field Marshal Rundstedt (who was unaware that Rommel had died as a result of Hitler's orders) as his representative to the funeral. The truth behind Rommel's death became known to the Allies when intelligence officer Charles Marshall interviewed Rommel's widow, Lucia Rommel, as well as from a letter by Rommel's son Manfred in April 1945. Rommel's grave is located in Herrlingen, a short distance west of Ulm. For decades after the war on the anniversary of his death, veterans of the Africa campaign, including former opponents, would gather at his tomb in Herrlingen. On the Italian front in the First World War, Rommel was a successful tactician in fast-developing mobile battle and this shaped his subsequent style as a military commander. He found that taking initiative and not allowing the enemy forces to regroup led to victory. Some authors argue that his enemies were often less organised, second-rate, or depleted, and his tactics were less effective against adequately led, trained and supplied opponents and proved insufficient in the later years of the war. Others point out that through his career, he frequently fought while out-numbered and out-gunned, sometimes overwhelmingly so, while having to deal with internal opponents in Germany who hoped that he would fail. Rommel is praised by numerous authors as a great leader of men. The historian and journalist Basil Liddell Hart concludes that he was a strong leader worshipped by his troops, respected by his adversaries and deserving to be named as one of the "Great Captains of History". Owen Connelly concurs, writing that "No better exemplar of military leadership can be found" and quoting Friedrich von Mellenthin on the inexplicable mutual understanding that existed between Rommel and his troops. Hitler, though, remarked that, "Unfortunately Field-Marshal Rommel is a very great leader full of drive in times of success, but an absolute pessimist when he meets the slightest problems." Telp criticises Rommel for not extending the benevolence he showed in promoting his own officers' careers to his peers, whom he ignored or slighted in his reports. Taking his opponents by surprise and creating uncertainty in their minds were key elements in Rommel's approach to offensive warfare: he took advantage of sand storms and the dark of night to conceal the movement of his forces. He was aggressive and often directed battle from the front or piloted a reconnaissance aircraft over the lines to get a view of the situation. 
When the British mounted a commando raid deep behind German lines in an effort to kill Rommel and his staff on the eve of their Crusader offensive, Rommel was indignant that the British expected to find his headquarters 400 kilometres (250 miles) behind his front. Mellenthin and Harald Kuhn write that at times in North Africa his absence from a position of communication made command of the battles of the Afrika Korps difficult. Mellenthin lists Rommel's counterattack during Operation Crusader as one such instance. Butler concurred, saying that leading from the front is a good concept but Rommel took it so far – he frequently directed the actions of a single company or battalion – that he made communication and coordination between units problematic, as well as risking his life to the extent that he could easily have been killed even by his own artillery. Albert Kesselring also complained about Rommel cruising about the battlefield like a division or corps commander; but Gause and Westphal, supporting Rommel, replied that in the African desert only this method would work and that it was useless to try to restrain Rommel anyway. His staff officers, although admiring towards their leader, complained about the self-destructive Spartan lifestyle that made life harder, diminished his effectiveness and forced them to "bab[y] him as unobtrusively as possible". For his leadership during the French campaign Rommel received both praise and criticism. Many, such as General Georg Stumme, who had previously commanded 7th Panzer Division, were impressed with the speed and success of Rommel's drive. Others were reserved or critical: Kluge, his commanding officer, argued that Rommel's decisions were impulsive and that he claimed too much credit, by falsifying diagrams or by not acknowledging contributions of other units, especially the Luftwaffe. Some pointed out that Rommel's division took the highest casualties in the campaign. Others point out that in exchange for 2,160 casualties and 42 tanks, it captured more than 100,000 prisoners and destroyed nearly two divisions' worth of enemy tanks (about 450 tanks), vehicles and guns. Rommel spoke German with a pronounced southern German or Swabian accent. He was not a part of the Prussian aristocracy that dominated the German high command, and as such was looked upon somewhat suspiciously by the Wehrmacht's traditional power structure. Rommel felt a commander should be physically more robust than the troops he led, and should always show them an example. He expected his subordinate commanders to do the same. Rommel was direct, unbending, tough in his manners, to superiors and subordinates alike, disobedient even to Hitler whenever he saw fit, although gentle and diplomatic to the lower ranks. Despite being publicity-friendly, he was also shy, introverted, clumsy and overly formal even to his closest aides, judging people only on their merits, although loyal and considerate to those who had proved reliability, and he displayed a surprisingly passionate and devoted side to a very small few (including Hitler) with whom he had dropped the seemingly impenetrable barriers. Rommel's relationship with the Italian High Command in North Africa was generally poor. Although he was nominally subordinate to the Italians, he enjoyed a certain degree of autonomy from them; since he was directing their troops in battle as well as his own, this was bound to cause hostility among Italian commanders. 
Conversely, as the Italian command had control over the supplies of the forces in Africa, they resupplied Italian units preferentially, which was a source of resentment for Rommel and his staff. Rommel's direct and abrasive manner did nothing to smooth these issues. While certainly much less proficient than Rommel in their leadership, aggression, tactical outlook and mobile warfare skills, Italian commanders were competent in logistics, strategy and artillery doctrine: their troops were ill-equipped but well-trained. As such, the Italian commanders were repeatedly at odds with Rommel over issues of supply. Field Marshal Kesselring was appointed Supreme Commander Mediterranean, at least in part to alleviate command problems between Rommel and the Italians. This effort resulted in only partial success, as Kesselring's own relationship with the Italians was unsteady and Kesselring claimed Rommel ignored him as readily as he ignored the Italians. Rommel often went directly to Hitler with his needs and concerns, taking advantage of the favouritism that the Führer displayed towards him and adding to the distrust that Kesselring and the German High Command already had of him. According to Scianna, opinion among the Italian military leaders was not unanimous. In general, Rommel was a target of criticism and a scapegoat for defeat rather than a glorified figure, with certain generals also trying to replace him as the heroic leader or to hijack the Rommel myth for their own benefit. Nevertheless, he never became a hated figure, although the "abandonment myth", despite being repudiated by officers of the X Corps themselves, was long-lived. Many found Rommel's chaotic leadership and emotional character hard to work with, yet the Italians held him in higher regard than other German senior commanders, militarily and personally. Very different, however, was the perception of Rommel by Italian common soldiers and NCOs, who, like the German field troops, had the deepest trust and respect for him. Paolo Colacicchi, an officer in the Italian Tenth Army, recalled that Rommel "became sort of a myth to the Italian soldiers". Rommel himself held a much more generous view of the Italian soldier than of the Italian leadership, towards whom his disdain, deeply rooted in militarism, was not atypical, although unlike Kesselring he was incapable of concealing it. Unlike many of his superiors and subordinates who held racist views, he was usually "kindly disposed" to the Italians in general. James J. Sadkovich cites examples of Rommel abandoning his Italian units, refusing cooperation, rarely acknowledging their achievements and otherwise behaving improperly towards his Italian allies. Giuseppe Mancinelli, who served as liaison between the German and Italian commands, accused Rommel of blaming Italians for his own errors. Sadkovich describes Rommel as arrogantly ethnocentric and disdainful towards Italians. Many authors describe Rommel as having a reputation as a chivalrous, humane and professional officer who earned the respect of both his own troops and his enemies. Gerhard Schreiber quotes Rommel's orders, issued together with Kesselring: "Sentimentality concerning the Badoglio following gangs ("Banden" in the original, indicating a mob-like crowd) in the uniforms of the former ally is misplaced. Whoever fights against the German soldier has lost any right to be treated well and shall experience toughness reserved for the rabble which betrays friends. 
Every member of the German troop has to adopt this stance." Schreiber writes that this exceptionally harsh and, according to him, "hate fuelled" order brutalised the war and was clearly aimed at Italian soldiers, not just partisans. Dennis Showalter writes that "Rommel was not involved in Italy's partisan war, though the orders he issued prescribing death for Italian soldiers taken in arms and Italian civilians sheltering escaped British prisoners do not suggest he would have behaved significantly different from his Wehrmacht counterparts." According to Maurice Remy, orders issued by Hitler during Rommel's stay in a hospital resulted in massacres in the course of Operation Achse, disarming the Italian forces after the armistice with the Allies in 1943. Remy also states that Rommel treated his Italian opponents with his usual fairness, requiring that the prisoners should be accorded the same conditions as German civilians. Remy opines that an order in which Rommel, in contrast to Hitler's directives, called for no "sentimental scruples" against "Badoglio-dependent bandits in uniforms of the once brothers-in-arms" should not be taken out of context. Peter Lieb agrees that the order did not radicalise the war and that the disarmament in Rommel's area of responsibility happened without major bloodshed. Italian internees were sent to Germany for forced labour, but Rommel was unaware of this. Klaus Schmider comments that the writings of Lieb and others succeed in vindicating Rommel "both with regards to his likely complicity in the July plot as well as his repeated refusal to carry out illegal orders." Rommel withheld Hitler's Commando Order to execute captured commandos from his Army Group B, with his units reporting that they were treating commandos as regular POWs. It is likely that he had acted similarly in North Africa. Historian Szymon Datner argues that Rommel may have been simply trying to conceal the atrocities of Nazi Germany from the Allies. Remy states that although Rommel had heard rumours about massacres while fighting in Africa, his personality, combined with special circumstances, meant that he was not fully confronted with the reality of atrocities before 1944. When Rommel learned about the atrocities that SS Division Leibstandarte committed in Italy in September 1943, he allegedly forbade his son from joining the Waffen-SS. By the time of the Second World War, French colonial troops were portrayed as a symbol of French depravity in Nazi propaganda; Canadian historian Myron Echenberg writes that Rommel, just like Hitler, viewed black French soldiers with particular disdain. According to author Ward Rutherford, Rommel also held racist views towards British colonial troops from India; Rutherford in his The biography of Field Marshal Erwin Rommel writes: "Not even his most sycophantic apologists have been able to evade the conclusion, fully demonstrated by his later behavior, that Rommel was a racist who, for example, thought it desperately unfair that the British should employ 'black' – by which he meant Indian – troops against a white adversary." Vaughn Raspberry writes that Rommel and other officers considered it an insult to fight against black Africans because they considered black people to be members of "inferior races". Bruce Watson comments that whatever racism Rommel might have had in the beginning, it was washed away when he fought in the desert. When he saw that they were fighting well, he gave the members of the 4th Division of the Indian Army high praise. 
Rommel and the Germans acknowledged the Gurkhas' fighting ability, although they felt their style leaned more towards ferocity; on one occasion Rommel saw German soldiers whose throats had been cut with a khukri knife. Originally, he did not want Chandra Bose's Indian formation (composed of Allied Indian soldiers captured by his own troops) to work under his command. In Normandy, though, when they had already become the Indische Freiwilligen Legion der Waffen SS, he visited them and praised them for their efforts (while they still suffered general disrespect within the Wehrmacht). A review of Rutherford's book in the Pakistan Army Journal says that the statement is one of many that Rutherford makes which lack supporting authority and analysis. Rommel's remark that using the Indians was unfair should also be put in perspective, considering the disbandment of the battle-hardened 4th Division by the Allies. Rommel praised the colonial troops in the Battle of France: "The (French) colonial troops fought with extraordinary determination. The anti-tank teams and tank crews performed with courage and caused serious losses", though that might be an example of generals honouring their opponents so that "their own victories appear the more impressive." Reuth comments that Rommel ensured that he and his command would act decently (shown by his treatment of the Free French prisoners, who were considered partisans by Hitler, of the Jews and of the coloured men), while he was distancing himself from Hitler's racist war in the East and deluding himself into believing that Hitler was good and only the Party big shots were evil. Black South African soldiers recounted that when they were held as POWs after being captured by Rommel's forces, they initially slept and queued for food away from the whites, until Rommel saw this and told them that brave soldiers should all queue together. Finding this strange coming from a man fighting for Hitler, they kept this behaviour until they went back to the Union of South Africa, where they were separated again. There are reports that Rommel acknowledged the Maori soldiers' fighting skills, yet at the same time he complained about their methods, which were unfair from the European perspective. When he asked the commander of the New Zealand 6th Infantry Brigade about his division's massacres of the wounded and POWs, the commander attributed these incidents to the Maoris in his unit. Hew Strachan notes that lapses in practising the warriors' code of war were usually attributed to ethnic groups which lived outside Europe, with the implication that those ethnic groups which lived in Europe knew how to behave (although Strachan opines that such attributions were probably true). Nevertheless, according to the website of the 28th Maori Battalion, Rommel always treated them fairly and also showed understanding with regard to war crimes. Some authors cite, among other cases, Rommel's naive reaction to events in Poland while he was there: he paid a visit to his wife's uncle, the Polish priest and patriotic leader Edmund Roszczynialski, who was murdered within days, but Rommel never understood this and, at his wife's urging, kept writing letter after letter to Himmler's adjutants asking them to keep track of and take care of their relative. Knopp and Mosier agree that he was naive politically, citing his request for a Jewish Gauleiter in 1943. 
Despite this, Lieb finds it hard to believe that a man in Rommel's position could have known nothing about atrocities, while accepting that locally he was separated from the places where these atrocities occurred. Der Spiegel comments that Rommel was simply in denial about what happened around him. Alaric Searle points out that it was the early diplomatic successes and bloodless expansion that blinded Rommel to the true nature of his beloved Führer, whom he then naively continued to support. Scheck believes it may be forever unclear whether Rommel recognised the unprecedentedly depraved character of the regime. Historian Richard J. Evans has stated that German soldiers in Tunisia raped Jewish women, and that the success of Rommel's forces in capturing or securing Allied, Italian and Vichy French territory in North Africa led to many Jews in these areas being killed by other German institutions as part of the Holocaust. Anti-Jewish and anti-Arab violence erupted in North Africa when Rommel and Ettore Bastico regained territory there in February 1941 and then again in April 1942. Although the violence was committed by Italian forces, Patrick Bernhard writes that "the Germans were aware of Italian reprisals behind the front lines. Yet, perhaps surprisingly, they seem to have exercised little control over events. The German consul general in Tripoli consulted with Italian state and party officials about possible countermeasures against the natives, but this was the full extent of German involvement. Rommel did not directly intervene, though he advised the Italian authorities to do whatever was necessary to eliminate the danger of riots and espionage; for the German general, the rear areas were to be kept 'quiet' at all costs. Thus, according to Bernhard, although he had no direct hand in the atrocities, Rommel made himself complicit in war crimes by failing to point out that international laws of war strictly prohibited certain forms of retaliation. By giving carte blanche to the Italians, Rommel implicitly condoned, and perhaps even encouraged, their war crimes". Gershom reports that the recommendation came from officers "speaking for Rommel", and comments, "Perhaps Rommel did not know or care about the specifics; perhaps his motivation was not hate but dispassionate efficiency. The distinctions would have escaped the men hanging from hooks." In his article Im Rücken Rommels. Kriegsverbrechen, koloniale Massengewalt und Judenverfolgung in Nordafrika, Bernhard writes that the North African campaign was hardly the "war without hate" that Rommel described, pointing to rapes of women, the ill-treatment and execution of captured POWs, and racially motivated murders of Arabs, Berbers and Jews, as well as the establishment of concentration camps. Bernhard again cites discussions among the German and Italian authorities about Rommel's position regarding countermeasures against local insurrection (according to them, Rommel wanted to eliminate the danger at all costs) to show that Rommel fundamentally approved of Italian policy in the matter. Bernhard opines that Rommel had informal power over the matter because his military success gave him influence over the Italian authorities. The United States Holocaust Memorial Museum describes the relationship between Rommel and the proposed Einsatzgruppe Egypt as "problematic". The Museum states that this unit was to be tasked with murdering the Jewish populations of North Africa and Palestine, and that it was to be attached directly to Rommel's Afrika Korps. 
According to the museum, Rauff met with Rommel's staff in 1942 as part of preparations for this plan. The Museum states that Rommel was certainly aware that planning was taking place, even if his reaction to it is not recorded, and that while the main proposed Einsatzgruppen were never set in action, smaller units did murder Jews in North Africa. On the other hand, Christopher Gabel remarks that Richard Evans seems to attempt to prove that Rommel was a war criminal by association but fails to produce evidence that he had actual or constructive knowledge of said crimes. Ben H. Shepherd comments that Rommel showed insight and restraint when dealing with the nomadic Arabs, the only civilians who occasionally intervened in the war and thus risked reprisals as a result. Shepherd cites a request by Rommel to the Italian High Command, in which he complained about excesses against the Arabic population and noted that reprisals without identifying the real culprits were never expedient. The documentary Rommel's War (Rommels Krieg), made by Caron and Müllner with advice from Sönke Neitzel, states that even though it is not clear whether Rommel knew about the crimes (in Africa) or not, "his military success made possible forced labor, torture and robbery. Rommel's war is always part of Hitler's war of worldviews, whether Rommel wanted it or not." More specifically, several German historians have revealed the existence of plans for an SS unit embedded with the Afrika Korps to exterminate Jews in Egypt and Palestine had Rommel succeeded in his goal of invading the Middle East in 1942. According to Mallmann and Cüppers, a post-war CIA report described Rommel as having met with Walther Rauff, who was responsible for the unit, as having been disgusted after learning about the plan from him, and as having sent him on his way; but they conclude that such a meeting is hardly possible, as Rauff was sent to report to Rommel at Tobruk on 20 July and Rommel was then 500 km away conducting the First Battle of El Alamein. On 29 July, Rauff's unit was sent to Athens, expecting to enter Africa when Rommel crossed the Nile. However, in view of the Axis' deteriorating situation in Africa, it returned to Germany in September. Historian Jean-Christoph Caron opines that there is no evidence that Rommel knew of or would have supported Rauff's mission; he also believes Rommel bore no direct responsibility for the SS's looting of gold in Tunisia. Historian Haim Saadon, Director of the Center of Research on North African Jewry in WWII, goes further, stating that there was no extermination plan: Rauff's documents show that his foremost concern was helping the Wehrmacht to win, and he came up with the idea of forced labour camps in the process. By the time these labour camps were in operation, according to Ben H. Shepherd, Rommel was already retreating, and there is no proof of his contact with the Einsatzkommando. Haaretz comments that the CIA report is most likely correct regarding both the interaction between Rommel and Rauff and Rommel's objections to the plan: Rauff's assistant Theodor Saevecke and declassified information from Rauff's file both report the same story. Haaretz also remarks that Rommel's influence probably softened the Nazi authorities' attitude towards the Jews and the civilian population generally in North Africa. 
Rolf-Dieter Müller comments that the war in North Africa, while as bloody as any other war, differed considerably from the war of annihilation in eastern Europe, because it was limited to a narrow coastline and hardly affected the population. Showalter writes: "From the desert campaign's beginning, both sides consciously sought to wage a 'clean' war—war without hate, as Rommel put it in his reflections. Explanations include the absence of civilians and the relative absence of Nazis; the nature of the environment, which conveyed a 'moral simplicity and transparency'; and the control of command on both sides by prewar professionals, producing a British tendency to depict war in the imagery of a game, and the corresponding German pattern of seeing it as a test of skill and a proof of virtue. The nature of the fighting as well diminished the last-ditch, close-quarter actions that are primary nurturers of mutual bitterness. A battalion overrun by tanks usually had its resistance broken so completely that nothing was to be gained by a broken-backed final stand." Joachim Käppner writes that while the conflict in North Africa was not as bloody as in Eastern Europe, the Afrika Korps committed some war crimes. Historian Martin Kitchen states that the reputation of the Afrika Korps was preserved by circumstances: the sparsely populated desert areas did not lend themselves to ethnic cleansing; the German forces never reached the large Jewish populations in Egypt and Palestine; and in the urban areas of Tunisia and Tripolitania the Italian government constrained the German efforts to discriminate against or eliminate Jews who were Italian citizens. Despite this, the North African Jews themselves believed that it was Rommel who prevented the "Final Solution" from being carried out against them when German might dominated North Africa from Egypt to Morocco. According to Curtis and Remy, 120,000 Jews lived in Algeria, 200,000 in Morocco, about 80,000 in Tunisia and 26,000 in Libya. Remy writes that these numbers were unchanged following the German invasion of Tunisia in 1942, while Curtis notes that 5,000 of these Jews would be sent to forced labour camps. Hein Klemann writes that the confiscations in the "foraging zone" of the Afrika Korps threatened the survival chances of local civilians, just as the plunder enacted by the Wehrmacht did in the Soviet Union. In North Africa, Rommel's troops laid down landmines which in the decades to come killed and maimed thousands of civilians. Since statistics began to be kept in the 1980s, 3,300 people have lost their lives and 7,500 have been maimed. It is disputed whether the landmines at El Alamein, which constitute the most notable portion of the landmines left over from World War II, were laid by the Afrika Korps or by the British Army led by Field Marshal Montgomery. Egypt has still not joined the Mine Ban Treaty. Rommel sharply protested the Jewish policies and other immoralities and was an opponent of the Gestapo. He also refused to comply with Hitler's order to execute Jewish POWs. Bryan Mark Rigg writes: "The only place in the army where one might find a place of refuge was in the Deutsches Afrika-Korps (DAK) under the leadership of the 'Desert Fox,' Field Marshal Erwin Rommel. According to this study's files, his half-Jews were not as affected by the racial laws as most others serving on the European continent." He notes, though, that "Perhaps Rommel failed to enforce the order to discharge half-Jews because he was unaware of it". 
Captain Horst van Oppenfeld (a staff officer to Colonel Claus von Stauffenberg and a quarter-Jew) says that Rommel did not concern himself with the racial decrees and that he never experienced any trouble caused by his ancestry during his time in the DAK, even though Rommel never personally interfered on his behalf. Another quarter-Jew, Fritz Bayerlein, became a famous general and Rommel's chief-of-staff, despite also being bisexual, which made his situation even more precarious. Building the Atlantic Wall was officially the responsibility of the Organisation Todt, which was not under Rommel's command, but he enthusiastically joined the task, protesting against the use of slave labour and suggesting that they should recruit French civilians and pay them good wages. Despite this, French civilians and Italian prisoners of war held by the Germans were forced by officials under the Vichy government, the Todt Organization and the SS to work on building some of the defences Rommel requested, in appalling conditions according to historian Will Fowler. Although the workers received basic wages, they complained that the pay was too little and that there was no heavy equipment. German troops worked almost round the clock under very harsh conditions, with Rommel rewarding them with accordions. Rommel was one of the commanders who protested against the Oradour-sur-Glane massacre. Rommel was famous in his lifetime, including among his adversaries. His tactical prowess and decency in the treatment of Allied prisoners earned him the respect of opponents including Claude Auchinleck, Archibald Wavell, George S. Patton, and Bernard Montgomery. Rommel's military reputation has been controversial. While nearly all military practitioners acknowledge Rommel's excellent tactical skills and personal bravery, some, such as U.S. major general and military historian David T. Zabecki of the United States Naval Institute, consider Rommel's performance as an operational-level commander to be highly overrated, a view shared by other officers. General Klaus Naumann, who served as Chief of Staff of the Bundeswehr, agrees with the military historian Charles Messenger that Rommel had challenges at the operational level, and states that Rommel's violation of the unity-of-command principle, bypassing the chain of command in Africa, was unacceptable and contributed to the eventual operational and strategic failure in North Africa. The German biographer Wolf Heckmann describes Rommel as "the most overrated commander of an army in world history". Nevertheless, a notable number of officers admire his methods, among them Norman Schwarzkopf, who described Rommel as a genius at battles of movement, saying: "Look at Rommel. Look at North Africa, the Arab-Israeli wars, and all the rest of them. A war in the desert is a war of mobility and lethality. It's not a war where straight lines are drawn in the sand and [you] say, 'I will defend here or die.'" Ariel Sharon deemed the German military model used by Rommel to be superior to the British model used by Montgomery. His compatriot Moshe Dayan likewise considered Rommel a model and icon. Wesley Clark states that "Rommel's military reputation, though, has lived on, and still sets the standard for a style of daring, charismatic leadership to which most officers aspire." During the recent desert wars, Rommel's military theories and experiences attracted great interest from policy makers and military instructors. Chinese military leader Sun Li-jen had the laudatory nickname "Rommel of the East". 
Certain modern military historians, such as Larry T. Addington, Niall Barr, Douglas Porch and Robert Citino, are sceptical of Rommel as an operational, let alone strategic level commander. They point to Rommel's lack of appreciation for Germany's strategic situation, his misunderstanding of the relative importance of his theatre to the German High Command, his poor grasp of logistical realities, and, according to the historian Ian Beckett, his "penchant for glory hunting". Citino credits Rommel's limitations as an operational level commander as "materially contributing" to the eventual demise of the Axis forces in North Africa, while Addington focuses on the struggle over strategy, whereby Rommel's initial brilliant success resulted in "catastrophic effects" for Germany in North Africa. Porch highlights Rommel's "offensive mentality", symptomatic of the Wehrmacht commanders as a whole in the belief that the tactical and operational victories would lead to strategic success. Compounding the problem was the Wehrmacht's institutional tendency to discount logistics, industrial output and their opponents' capacity to learn from past mistakes. The historian Geoffrey P. Megargee points out Rommel's playing the German and Italian command structures against each other to his advantage. Rommel used the confused structure—the High command of the armed forces, the OKH (Supreme High Command of the Army) and the Comando Supremo (Italian Supreme Command)—to disregard orders that he disagreed with or to appeal to whatever authority he felt would be most sympathetic to his requests. Some historians take issue with Rommel's absence from Normandy on the day of the Allied invasion, 6 June 1944. He had left France on 5 June and was at home on the 6th celebrating his wife's birthday. (According to Rommel, he planned to proceed to see Hitler the next day to discuss the situation in Normandy). Zabecki calls his decision to leave the theatre in view of an imminent invasion "an incredible lapse of command responsibility". Lieb remarks that Rommel displayed real mental agility, but the lack of an energetic commander, together with other problems, caused the battle largely not to be conducted in his concept (which is the opposite of the German doctrine), although the result was still better than Geyr's plan. Lieb also opines that while his harshest critics (who mostly came from the General Staff) often said that Rommel was overrated or not suitable for higher commands, envy was a big factor here. T.L. McMahon argues that while Rommel no doubt possessed operational vision, he did not have the strategic resources to effect his operational choices while his forces provided the tactical ability to accomplish his goals, and the German staff and system of staff command were designed for commanders who led from the front, and in some cases he might have chosen the same options as Montgomery (a reputedly strategy-oriented commander) had he been put in the same conditions. According to Steven Zaloga, tactical flexibility was a great advantage of the German system, but in the final years of the war, Hitler and his cronies like Himmler and Goering had usurped more and more authority at the strategic level, leaving professionals like Rommel increasing constraints on their actions. 
Martin Blumenson considers Rommel a general with a compelling view of strategy and logistics, which was demonstrated through his many arguments with his superiors over such matters, although Blumenson also thinks that what distinguished Rommel was his boldness and his intuitive feel for the battlefield (upon which Schwarzkopf also comments: "Rommel had a feel for the battlefield like no other man"). Joseph Forbes comments that: "The complex, conflict-filled interaction between Rommel and his superiors over logistics, objectives and priorities should not be used to detract from Rommel's reputation as a remarkable military leader", because Rommel was not given powers over logistics, and because if only generals who attain strategic-policy goals are counted as great generals, such highly regarded commanders as Robert E. Lee, Hannibal and Charles XII would have to be excluded from that list. General Siegfried F. Storbeck, Deputy Inspector General of the Bundeswehr (1987–1991), remarks that Rommel's leadership style and offensive thinking, although carrying inherent risks such as losing the overview of the situation and creating overlaps of authority, proved effective and have been analysed and incorporated into officer training by "us, our Western allies, the Warsaw Pact, and even the Israel Defense Forces". Maurice Remy defends his strategic decision regarding Malta as, although risky, the only logical choice. Rommel was among the few Axis commanders (the others being Isoroku Yamamoto and Reinhard Heydrich) who were targeted for assassination by Allied planners. Two attempts were made, the first being Operation Flipper in North Africa in 1941, and the second being Operation Gaff in Normandy in 1944. Research by Norman Ohler claims that Rommel's behaviour was heavily influenced by Pervitin, which he reportedly took in heavy doses, to such an extent that Ohler refers to him as "the Crystal Fox" ("Kristallfuchs") – playing off the nickname "Desert Fox" famously given to him by the British. In France, Rommel ordered the execution of one French officer who refused three times to cooperate when being taken prisoner; there are disputes as to whether this execution was justified. Caddick-Adams comments that this would make Rommel a war criminal condemned by his own hand, and that other authors overlook this episode. Butler notes that the officer refused to surrender three times and thus died in a courageous but foolhardy way. French historian Petitfrère remarks that Rommel was in a hurry and had no time for useless palavers, although this act was still debatable. Telp remarks that "he treated prisoners of war with consideration. On one occasion, he was forced to order the shooting of a French lieutenant-colonel for refusing to obey his captors." Scheck says, "Although there is no evidence incriminating Rommel himself, his unit did fight in areas where German massacres of black French prisoners of war were extremely common in June 1940." There are reports that during the fighting in France, Rommel's 7th Panzer Division committed atrocities against surrendering French troops and captured prisoners of war. The atrocities, according to Martin S. Alexander, included the murder of 50 surrendering officers and men at Quesnoy and the nearby Airaines. According to Richardot, on 7 June, the commanding French officer Charles N'Tchoréré and his company surrendered to the 7th Panzer Division. He was then executed by the 25th Infantry Regiment (the 7th Panzer Division did not have a 25th Infantry Regiment).
Journalist Alain Aka states simply that he was executed by one of Rommel's soldiers and that his body was driven over by a tank. Erwan Bergot reports that he was killed by the SS. Historian John Morrow states that he was shot in the neck by a Panzer officer, without naming the perpetrators' unit. The website of the National Federation of Volunteer Servicemen (F.N.C.V., France) states that N'Tchoréré was pushed against a wall and, despite protests from his comrades and newly liberated German prisoners, was shot by the SS. Elements of the division are considered by Scheck to have been "likely" responsible for the murder of POWs in Hangest-sur-Somme, while Scheck reports that they were too far away to have been involved in the massacres at Airaines and nearby villages. Scheck says that the German units fighting there came from the 46th and 2nd Infantry Divisions, and possibly from the 6th and 27th Infantry Divisions as well. Scheck also writes that there were no SS units in the area. Morrow, citing Scheck, says that the 7th Panzer Division carried out "cleansing operations". French historian Dominique Lormier puts the number of victims of the 7th Panzer Division in Airaines at 109, mostly French-African soldiers from Senegal. Showalter writes: "In fact, the garrison of Le Quesnoy, most of them Senegalese, took heavy toll of the German infantry in house-to-house fighting. Unlike other occasions in 1940, when Germans and Africans met, there was no deliberate massacre of survivors. Nevertheless, the riflemen took few prisoners, and the delay imposed by the tirailleurs forced the Panzers to advance unsupported until Rommel was ordered to halt for fear of coming under attack by Stukas." Claus Telp comments that Airaines was not in the sector of the 7th, but that at Hangest and Martainville elements of the 7th might have shot some prisoners and used British Colonel Broomhall as a human shield (although Telp considers it unlikely that Rommel approved of, or even knew about, these two incidents). Historian David Stone notes that acts of shooting surrendered prisoners were carried out by Rommel's 7th Panzer Division and observes contradictory statements in Rommel's account of the events; Rommel initially wrote that "any enemy troops were wiped out or forced to withdraw" but also added that "many prisoners taken were hopelessly drunk." Stone attributes the massacres of soldiers from the 53ème Régiment d'Infanterie Coloniale (N'Tchoréré's unit) on 7 June to the 5th Infantry Division. Historian Daniel Butler agrees that it was possible that the massacre at Le Quesnoy happened, given the existence of Nazis such as Hanke in Rommel's division, while noting that, in comparison with other German units, few sources exist regarding such actions by the men of the 7th Panzer. Butler believes that "it's almost impossible to imagine" Rommel authorising or countenancing such actions. He also writes that "Some accusers have twisted a remark in Rommel's own account of the action in the village of Le Quesnoy as proof that he at least tacitly condoned the executions—'any enemy troops were either wiped out or forced to withdraw'—but the words themselves as well as the context of the passage hardly support the contention." Giordana Terracina writes that: "On April 3, the Italians recaptured Benghazi and a few months later the Afrika Korps led by Rommel was sent to Libya and began the deportation of the Jews of Cyrenaica in the concentration camp of Giado and other smaller towns in Tripolitania.
This measure was accompanied by shooting, also in Benghazi, of some Jews guilty of having welcomed the British troops, on their arrival, treating them as liberators." Gorenberg states that the Italian authorities were responsible for bringing Jews into their concentration camps, which were "not built to exterminate its inmates", yet, as the water and food supply was meagre, were not built to keep humans alive either. Also according to Gorenberg, the German consul in Tripoli knew about the process, and trucks used to transport supplies to Rommel were sometimes used to transport Jews, despite all the problems the German forces were having. The Jerusalem Post's review of Gershom Gorenberg's War of Shadows states: "The Italians were far more brutal with civilians, including Libyan Jews, than Rommel's Afrika Korps, which by all accounts abided by the laws of war. But nobody worried that the Italians who sent Jews to concentration camps in Libya, would invade British-held Egypt, let alone Mandatory Palestine." According to German historian Wolfgang Proske, Rommel forbade his soldiers from buying anything from the Jewish population of Tripoli, used Jewish slave labour and commanded Jews to clear out minefields by walking on them ahead of his forces. According to Proske, some of the Libyan Jews were eventually sent to concentration camps. Historians Christian Schweizer and Peter Lieb note that: "Over the last few years, even though the social science teacher Wolfgang Proske has sought to participate in the discussion [on Rommel] with very strong opinions, his biased submissions are not scientifically received." The Heidenheimer Zeitung notes that Proske published his main work, Täter, Helfer, Trittbrettfahrer – NS-Belastete von der Ostalb, himself, after failing to find another publisher. According to historian Michael Wolffsohn, during the Africa campaign preparations for committing genocide against the North African Jews were in full swing, and a thousand of them were transported to East European concentration camps. At the same time, he recommends that the Bundeswehr keep the names and traditions associated with Rommel (although Wolffsohn opines that the focus should be on the politically thoughtful soldier Rommel became at the end of his life, rather than on the swashbuckler and humane rogue). Robert Satloff writes in his book Among the Righteous: Lost Stories from the Holocaust's Long Reach into Arab Lands that as the German and Italian forces retreated across Libya towards Tunisia, the Jewish population became the victim upon which they released their anger and frustration. According to Satloff, Afrika Korps soldiers plundered Jewish property all along the Libyan coast. This violence and persecution came to an end only with the arrival of General Montgomery in Tripoli on 23 January 1943. According to Maurice Remy, although there were antisemitic individuals in the Afrika Korps, actual cases of abuse are not known, even against the Jewish soldiers of the Eighth Army. Remy quotes Isaac Levy, the Senior Jewish Chaplain of the Eighth Army, as saying that he had never seen "any sign or hint that the soldiers [of the Afrika Korps] are antisemitic". The Telegraph comments: "Accounts suggest that it was not Field Marshal Erwin Rommel but the ruthless SS colonel Walter Rauff who stripped Tunisian Jews of their wealth." Commenting on Rommel's conquest of Tunisia, Marvin Perry writes that: "The bridgehead Rommel established in Tunisia enabled the SS to herd Jews into slave labor camps."
Der Spiegel writes that: "The SS had established a network of labor camps in Tunisia. More than 2,500 Tunisian Jews died in six months of German rule, and the regular army was also involved in executions." Caron writes in Der Spiegel that the camps were organised in early December 1942 by Nehring, the commander in Tunisia, and Rauff, while Rommel was retreating. As commander of the German Afrika Korps, Nehring would continue to use Tunisian forced labour. According to Caddick-Adams, no Waffen-SS served under Rommel in Africa at any time, and most of the activities of Rauff's detachment happened after Rommel's departure. Shepherd notes that during this time Rommel was retreating and that there is no evidence he had contact with the Einsatzkommando. Addressing the call of some authors to contextualise Rommel's actions in Italy and North Africa, Wolfgang Mährle notes that while it is undeniable that Rommel played the role of a Generalfeldmarschall in a criminal war, this only illustrates in a limited way his personal attitude and the actions that resulted from it. According to several historians, the allegations and stories that associate Rommel and the Afrika Korps with the plundering of Jewish gold and property in Tunisia are usually known under the name "Rommel's treasure" or "Rommel's gold". Michael FitzGerald comments that the treasure would more accurately be named Rauff's gold, as Rommel had nothing to do with its acquisition or removal. Jean-Christoph Caron comments that the treasure legend has a real core, and that Jewish property was looted by the SS in Tunisia and later might have been hidden or sunk off Corsica, where Rauff was stationed in 1943. The full-blown legend originated with the SS soldier Walter Kirner, who presented a false map to the French authorities. Caron and Jörg Müllner, his co-author of the ZDF documentary Rommel's Treasure (Rommels Schatz), tell Die Welt that "Rommel had nothing to do with the treasure, but his name is associated with everything that happened in the war in Africa." Rick Atkinson criticises Rommel for accepting a looted stamp collection (a bribe from Sepp Dietrich) and a villa taken from Jews. Lucas, Matthews and Remy, though, describe Rommel's contemptuous and angry reaction to Dietrich's act and to the looting and other brutal behaviour of the SS that he had discovered in Italy. Claudia Hecht also explains that although the Stuttgart and Ulm authorities did arrange for the Rommel family to use, for a brief period after their own house had been destroyed by Allied bombing, a villa whose Jewish owners had been forced out two years earlier, ownership of it was never transferred to them. Butler notes that Rommel was one of the few who refused the large estates and gifts of cash Hitler gave to his generals. At the beginning, although Hitler and Goebbels took particular notice of Rommel, the Nazi elites had no intention of creating one major war symbol (partly out of fear that he would overshadow Hitler), and generated huge propaganda campaigns not only for Rommel but also for Gerd von Rundstedt, Walther von Brauchitsch, Eduard Dietl and Sepp Dietrich (the latter two were party members and also strongly supported by Hitler), among others.
Nevertheless, a multitude of factors—including Rommel's unusual charisma, his talents in both military matters and public relations, the efforts of Goebbels's propaganda machine, and the Allies' participation in mythologising his life (whether for political benefit, out of sympathy for someone who evoked a romantic archetype, or out of genuine admiration for his actions)—gradually contributed to Rommel's fame. Der Spiegel wrote, "Even back then his fame outshone that of all other commanders." Rommel's victories in France were featured in the German press and in the February 1941 film Sieg im Westen (Victory in the West), in which Rommel personally helped direct a segment re-enacting the crossing of the Somme River. According to Scheck, although there is no evidence of Rommel committing crimes, African prisoners of war were forced to take part in the shooting of the film and to carry out humiliating acts. Stills from the re-enactment are found in the "Rommel Collection"; the segment was filmed by Hans Ertl, assigned to this task by Dr. Kurt Hesse, a personal friend of Rommel who worked for Wehrmacht Propaganda Section V. Rommel's victories in 1941 were played up by Nazi propaganda, even though his successes in North Africa were achieved in arguably one of Germany's least strategically important theatres of World War II. In November 1941, Reich Minister of Propaganda Joseph Goebbels wrote about "the urgent need" to have Rommel "elevated to a kind of popular hero." Rommel, with his innate abilities as a military commander and love of the spotlight, was a perfect fit for the role Goebbels designed for him. In North Africa, Rommel received help in cultivating his image from Alfred Ingemar Berndt, a senior official at the Reich Propaganda Ministry who had volunteered for military service. Seconded by Goebbels, Berndt was assigned to Rommel's staff and became one of his closest aides. Berndt often acted as liaison between Rommel, the Propaganda Ministry, and the Führer Headquarters. He directed Rommel's photo shoots and filed radio dispatches describing the battles. In the spring of 1941, Rommel's name began to appear in the British media. In the autumn of 1941 and early winter of 1941/1942, he was mentioned in the British press almost daily. Toward the end of the year, the Reich propaganda machine also used Rommel's successes in Africa as a diversion from the Wehrmacht's challenging situation in the Soviet Union with the stall of Operation Barbarossa. The American press soon began to take notice of Rommel as well, following the country's entry into the war on 11 December 1941, writing that "The British (...) admire him because he beat them and were surprised to have beaten in turn such a capable general." General Auchinleck distributed a directive to his commanders seeking to dispel the notion that Rommel was a "superman". Rommel, no matter how hard the situation was, made a deliberate effort always to spend some time with soldiers and patients, his own and POWs alike, which contributed greatly to his reputation among the troops of being not only a great commander but also "a decent chap". The attention of the Western and especially the British press thrilled Goebbels, who wrote in his diary in early 1942: "Rommel continues to be the recognized darling of even the enemies' news agencies." The Field Marshal was pleased by the media attention, although he knew the downsides of having a reputation.
Hitler took note of the British propaganda as well, commenting in the summer of 1942 that Britain's leaders must have hoped "to be able to explain their defeat to their own nation more easily by focusing on Rommel". The Field Marshal was the German commander most frequently covered in the German media, and the only one to be given a press conference, which took place in October 1942. The press conference was moderated by Goebbels and was attended by both domestic and foreign media. Rommel declared: "Today we (...) have the gates of Egypt in hand, and with the intent to act!" Keeping the focus on Rommel distracted the German public from Wehrmacht losses elsewhere as the tide of the war began to turn. He became a symbol that was used to reinforce the German public's faith in an ultimate Axis victory. In the wake of the successful British offensive in November 1942 and other military reverses, the Propaganda Ministry directed the media to emphasise Rommel's invincibility. The charade was maintained until the spring of 1943, even as the German situation in Africa became increasingly precarious. To ensure that the inevitable defeat in Africa would not be associated with Rommel's name, Goebbels had the Army High Command announce in May 1943 that Rommel was on a two-month leave for health reasons. Instead, the campaign was presented by Berndt, who had resumed his role in the Propaganda Ministry, as a ruse to tie down the British Empire while Germany turned Europe into an impenetrable fortress, with Rommel at the helm of this success. After the radio programme ran in May 1943, Rommel sent Berndt a case of cigars as a sign of his gratitude. Although Rommel then entered a period without a significant command, he remained a household name in Germany, synonymous with the aura of invincibility. Hitler then made Rommel part of his defensive strategy for Fortress Europe (Festung Europa) by sending him to the West to inspect fortifications along the Atlantic Wall. Goebbels supported the decision, noting in his diary that Rommel was "undoubtedly the suitable man" for the task. The propaganda minister expected the move to reassure the German public and at the same time to have a negative impact on the Allied forces' morale. In France, a Wehrmacht propaganda company frequently accompanied Rommel on his inspection trips to document his work for both domestic and foreign audiences. In May 1944 the German newsreels reported on Rommel's speech at a Wehrmacht conference, where he stated his conviction that "every single German soldier will make his contribution against the Anglo-American spirit that it deserves for its criminal and bestial air war campaign against our homeland." The speech led to an upswing in morale and sustained confidence in Rommel. When Rommel was seriously wounded on 17 July 1944, the Propaganda Ministry undertook efforts to conceal the injury so as not to undermine domestic morale. Despite these efforts, the news leaked to the British press. To counteract the rumours of a serious injury and even death, Rommel was required to appear at a press conference on 1 August. On 3 August, the German press published an official report that Rommel had been injured in a car accident. Rommel noted in his diary his dismay at this twisting of the truth, belatedly realising how much Reich propaganda was using him for its own ends. Rommel was interested in propaganda beyond the promotion of his own image.
In 1944, after visiting Rommel in France and reading his proposals on counteracting Allied propaganda, Alfred-Ingemar Berndt remarked: "He is also interested in this propaganda business and wants to develop it by all means. He has even thought and brought out practical suggestions for each program and subject." Rommel saw the propaganda and educational value in his own and his nation's deeds (he also valued justice itself; according to Admiral Ruge's diary, Rommel told Ruge: "Justice is the indispensable foundation of a nation. Unfortunately, the higher-ups are not clean. The slaughterings are grave sins."). The key to successfully creating an image, according to Rommel, was leading by example: The men tend to feel no kind of contact with a commander who, they know, is sitting somewhere in headquarters. What they want is what might be termed a physical contact with him. In moments of panic, fatigue, or disorganization, or when something out of the ordinary has to be demanded from them, the personal example of the commander works wonders, especially if he has had the wit to create some sort of legend around himself. He urged the Axis authorities to treat the Arabs with the utmost respect to prevent uprisings behind the front. He protested, though, against the use of propaganda at the cost of clear military benefit, criticising Hitler's headquarters for being unable to tell the German people and the world that El Alamein had been lost, which in the process prevented the evacuation of the German forces from North Africa. Ruge suggests that his chief treated his own fame as a kind of weapon. In 1943, he surprised Hitler by proposing that a Jew should be made a Gauleiter to prove to the world that Germany was innocent of the accusations regarding the mistreatment of Jews that Rommel had heard from the enemy's propaganda. Hitler replied, "Dear Rommel, you understand nothing about my thinking at all." Rommel was not a member of the Nazi Party. Rommel and Hitler had a close and genuine, if complicated, personal relationship. Rommel, like other Wehrmacht officers, welcomed the Nazi rise to power. Numerous historians state that Rommel was one of Hitler's favourite generals and that his close relationship with the dictator benefited both his inter-war and wartime careers. Robert Citino describes Rommel as "not apolitical" and writes that he owed his career to Hitler, to whom Rommel's attitude was "worshipful", with Messenger agreeing that Rommel owed his tank command, his hero status and other promotions to Hitler's interference and support. Kesselring described Rommel's own power over Hitler as "hypnotic". In 1944, Rommel himself told Ruge and his wife that Hitler had a kind of irresistible magnetic aura ("Magnetismus") and always seemed to be in an intoxicated condition. Maurice Remy identifies 1939 as the point at which their relationship became a personal one, when Rommel proudly announced to his friend Kurt Hesse that he had "sort of forced Hitler to go with me (to the Hradschin Castle in Prague, in an open top car, without another bodyguard), under my personal protection ... He had entrusted himself to me and would never forget me for my excellent advice." The close relationship between Rommel and Hitler continued following the Western campaign; after Rommel sent him a specially prepared diary on the 7th Division, he received a letter of thanks from the dictator. (According to Speer, he would normally send extremely unclear reports, which annoyed Hitler greatly.)
According to Maurice Remy, the relationship, which Remy calls "a dream marriage", showed its first crack only in 1942, and later gradually turned into, in the words of the German writer Ernst Jünger (who was in contact with Rommel in Normandy), "Haßliebe" (a love-hate relationship). Ruge's diary and Rommel's letters to his wife show his mood fluctuating wildly regarding Hitler: while he showed disgust towards the atrocities and disappointment towards the situation, he was overjoyed to welcome a visit from Hitler, only to return to depression the next day when faced with reality. Hitler displayed the same emotions. Amid growing doubts and differences, he would remain eager for Rommel's calls (they had almost daily, hour-long, highly animated conversations, with the preferred topic being technical innovations): he once almost grabbed the telephone out of Linge's hand. But, according to Linge, seeing Rommel's disobedience, Hitler also realised his mistake in building up Rommel, whom not only the Afrika Korps but also the German people in general now considered a German god. Hitler tried to fix the dysfunctional relationship many times without result, with Rommel calling his attempts "Sunlamp Treatment", although later he said that "Once I have loved the Führer, and I still do." Remy and Der Spiegel remark that the statement was very much genuine, while Watson notes that Rommel believed he deserved to die for his treasonable plan. Rommel was an ambitious man who took advantage of his proximity to Hitler and willingly accepted the propaganda campaigns designed for him by Goebbels. On the one hand, he wanted personal promotion and the realisation of his ideals. On the other hand, being elevated by the traditional system that gave preferential treatment to aristocratic officers would have been a betrayal of his aspiration "to remain a man of the troops". In 1918, Rommel refused an invitation to a prestigious officer training course, and with it the chance to be promoted to general. Additionally, he had no inclination towards the political route, preferring to remain a soldier ("Nur-Soldat"). He was thus attracted by the Common Man theme, which promised to level German society, by the glorification of the national community, and by the idea of a soldier of common background who served the Fatherland with talent and was rewarded by another common man who embodied the will of the German people. While he felt much indignation at Germany's contemporary class problem, this self-association with the Common Man went along well with his desire to emulate the knights of the past, who also led from the front. Rommel seemed to cherish the idea of peace, as shown by his words to his wife in August 1939: "You can trust me, we have taken part in one World War, but as long as our generation live, there will not be a second", as well as the letter he sent her the night before the invasion of Poland, in which he expressed (in Maurice Remy's phrase) "boundless optimism": "I still believe the atmosphere will not become more bellicose." Butler remarks that Rommel was centrist in his politics, leaning a little to the left in his attitudes. Messenger argues that Rommel's attitude towards Hitler changed only after the Allied invasion of Normandy, when Rommel came to realise that the war could not be won, while Maurice Remy suggests that Rommel never truly broke away from his relationship with Hitler but praises him for "always [having] the courage to oppose him whenever his conscience required so".
The historian Peter Lieb states that it was not clear whether the threat of defeat was the only reason Rommel wanted to switch sides. The relationship seemed to go significantly downhill after a conversation in July 1943, in which Hitler told Rommel that if they did not win the war, the Germans could rot. Rommel even began to think it fortunate that his Afrika Korps was now safe as POWs and could escape Hitler's Wagnerian ending. Die Welt comments that Hitler chose Rommel as his favourite because he was apolitical, and that the combination of his military expertise and circumstances allowed Rommel to remain clean. Rommel's political inclinations were a controversial matter even among the contemporary Nazi elites. Rommel himself, while showing support for some facets of Nazi ideology and enjoying the propaganda machine that the Nazis had built around him, was enraged by the Nazi media's efforts to portray him as an early Party member and the son of a mason, forcing them to correct this misinformation. The Nazi elites were not comfortable with the idea of a national icon who did not wholeheartedly support the regime. Hitler and Goebbels, his main supporters, tended to defend him. When Rommel was being considered for appointment as Commander-in-Chief of the Army in the summer of 1942, Goebbels wrote in his diary that Rommel "is ideologically sound, is not just sympathetic to the National Socialists. He is a National Socialist; he is a troop leader with a gift for improvisation, personally courageous and extraordinarily inventive. These are the kinds of soldiers we need." Despite this, they gradually saw that his grasp of political realities and his views could be very different from theirs. Hitler knew, though, that Rommel's optimistic and combative character was indispensable for his war efforts. When Rommel lost faith in the final victory and in Hitler's leadership, Hitler and Goebbels tried to find an alternative in Manstein to remedy the fighting will and "political direction" of other generals, but did not succeed. Meanwhile, officials who did not like Rommel, such as Bormann and Schirach, whispered to each other that he was not a Nazi at all. Rommel's relationship with the Nazi elites, other than Hitler and Goebbels, was mostly hostile, although even powerful figures like Bormann and Himmler had to tread carefully around him. Himmler, who played a decisive role in Rommel's death, tried to blame Keitel and Jodl for the deed—and in fact the deed had been initiated by them. They deeply resented Rommel's meteoric rise and had long feared that he would become the Commander-in-Chief. (Hitler, too, played innocent, by trying to erect a monument to the national hero on 7 March 1945.) Franz Halder, after concocting several schemes to rein in Rommel through people like Paulus and Gause to no avail (and even being willing to undermine German operations and strategy in the process for the sole purpose of embarrassing him), concluded that Rommel was a madman with whom no one dared cross swords because of "his brutal methods and his backing from the highest levels". (Rommel imposed a high number of courts martial but, according to Westphal, never signed the final order; Owen Connelly comments that he could afford easy discipline because of his charisma.) Rommel, for his part, was highly critical of Himmler, Halder, the High Command and particularly Goering, whom he at one point called his "bitterest enemy".
Hitler realised that Rommel attracted the elites' negative emotions to himself in the same way that he generated optimism among the common people. Depending on the case, Hitler manipulated or exacerbated the situation in order to benefit himself, although he originally had no intention of pushing Rommel to the point of destruction. (Even when informed of Rommel's involvement in the plot, hurt and vengeful, Hitler at first wanted to retire Rommel, and eventually offered him a last-minute chance to explain himself and refute the claims, which Rommel apparently did not take advantage of.) Ultimately, Rommel's enemies worked together to bring him down. Maurice Remy concludes that, unwillingly and probably without ever realising it, Rommel was part of a murderous regime, although he never actually grasped the core of Nazism. Peter Lieb sees Rommel as a person who cannot be put into a single box, albeit problematic by modern moral standards, and suggests that people should decide for themselves whether Rommel should remain a role model or not. He was a Nazi general in some respects, considering his support for the leader cult (Führerkult) and the Volksgemeinschaft, but he was not an antisemite, nor a war criminal, nor a radical ideological fighter. Historian Cornelia Hecht remarks, "It is really hard to know who the man behind the myth was," noting that in the numerous letters he wrote to his wife during their almost 30-year marriage, he commented little on political issues or on his personal life as a husband and father. According to some revisionist authors, an assessment of Rommel's role in history has been hampered by views of Rommel that were formed, at least in part, for political reasons, creating what these historians have called the "Rommel myth". The interpretation considered by some historians to be a myth is the depiction of the Field Marshal as an apolitical, brilliant commander and a victim of Nazi Germany who participated in the 20 July plot against Adolf Hitler. A notable number of authors, though, refer to the "Rommel myth" or "Rommel legend" in a neutral or positive manner. The seeds of the myth can be found first in Rommel's drive for success as a young officer in World War I and then in his popular 1937 book Infantry Attacks, which was written in a style that diverged from the German military literature of the time and became a best-seller. The myth then took shape during the opening years of World War II, as a component of Nazi propaganda to praise the Wehrmacht and instill optimism in the German public, with Rommel's willing participation. When Rommel came to North Africa, it was picked up and disseminated in the West by the British press as the Allies sought to explain their continued inability to defeat the Axis forces in North Africa. British military and political figures contributed to the heroic image of the man as Rommel resumed offensive operations in January 1942 against British forces weakened by redeployments to the Far East. During parliamentary debate following the fall of Tobruk, Churchill described Rommel as an "extraordinary bold and clever opponent" and a "great field commander". According to Der Spiegel, following the war's end West Germany yearned for father figures to replace the former ones who had been unmasked as criminals.
Rommel was chosen because he embodied the decent soldier, cunning yet fair-minded, and, if guilty by association, not so guilty that he became unreliable; additionally, former comrades reported that he had been close to the Resistance. While everyone else was disgraced, his star became brighter than ever, and he made the historically unprecedented leap over the threshold between eras: from Hitler's favourite general to the young republic's hero. Cornelia Hecht notes that despite the change of times, Rommel has become the symbol of different regimes and concepts, which is paradoxical, whoever he really was. At the same time, the Western Allies, and particularly the British, depicted Rommel as the "good German". His reputation for conducting a clean war was used in the interest of West German rearmament and reconciliation between the former enemies—Britain and the United States on one side and the new Federal Republic of Germany on the other. When Rommel's alleged involvement in the plot to kill Hitler became known after the war, his stature was enhanced in the eyes of his former adversaries. Rommel was often cited in Western sources as a patriotic German willing to stand up to Hitler. Churchill wrote about him in 1950: "[Rommel] (...) deserves our respect because, although a loyal German soldier, he came to hate Hitler and all his works and took part in the conspiracy of 1944 to rescue Germany by displacing the maniac and tyrant." While at Cadet School in 1911, Rommel met and became engaged to 17-year-old Lucia (Lucie) Maria Mollin (1894–1971). While stationed in Weingarten in 1913, Rommel developed a relationship with Walburga Stemmer, which produced a daughter, Gertrud, born 8 December 1913. Because of elitism in the officer corps, Stemmer's working-class background made her unsuitable as an officer's wife, and Rommel felt honour-bound to uphold his previous commitment to Mollin. With Mollin's cooperation, he accepted financial responsibility for the child. Rommel and Mollin were married in November 1916 in Danzig. Rommel's marriage was a happy one, and he wrote his wife at least one letter every day while he was in the field. After the end of the First World War, the couple settled initially in Stuttgart, and Stemmer and her child lived with them. Gertrud was referred to as Rommel's niece, a fiction that went unquestioned because of the enormous number of women widowed during the war. Walburga died suddenly in October 1928, and Gertrud remained a member of the household until Rommel's death in 1944. The incident with Walburga seemed to affect Rommel for the rest of his life: he would always keep women at a distance. A son, Manfred Rommel, was born on 24 December 1928; he later served as Mayor of Stuttgart from 1974 to 1996. The German Army's largest base, the Field Marshal Rommel Barracks, Augustdorf, is named in his honour; at the dedication in 1961 his widow Lucie and son Manfred Rommel were guests of honour. The Rommel Barracks, Dornstadt, was also named for him in 1965. A third base named for him, the Field Marshal Rommel Barracks, Osterode, closed in 2004. The German destroyer Rommel was named for him in 1969 and christened by his widow; the ship was decommissioned in 1998. The Rommel Memorial was erected in Heidenheim in 1961. In 2020, a sculpture of a landmine victim was placed next to the Rommel Memorial in Heidenheim. The city's mayor, Bernhard Ilg, commented that, regarding "the great son of Heidenheim", "there are many opinions".
Heidenheim eventually rededicated the memorial as a stand against war, militarism and extremism, stating that when the memorial was erected in 1961, statements were added that are no longer compatible with modern knowledge about Rommel. Deutsche Welle notes that the 17 million mines the British, Italian and German armies left behind continue to claim lives to this day. In Aalen, after a discussion on renaming a street named after him, a new place of commemoration was created, where stelae with information on the lives of Rommel and three opponents of the regime (Eugen Bolz, Friedrich Schwarz and Karl Mikeller) stand together (Rommel's stele is dark blue and rusty red, while the others are light-coloured). The History Association of Aalen, together with an independent commission of historians from Düsseldorf, welcomes the retention of the street's name and notes that Rommel was neither a war criminal nor a resistance fighter, but perpetrator and victim at the same time – he willingly served as a figurehead for the regime, recognised his mistake only late, and paid for it with his life. An education programme named "Erwin Rommel and Aalen" has also been established for schoolchildren in Aalen. In 2021, the Student Council of the Friedrich-Alexander-University Erlangen-Nürnberg (FAU) decided to rename its Süd-Campus (South Campus, Erlangen) as the Rommel-Campus, emphasising that the city of Erlangen stands behind the name and that the university needs to do the same. The university's branch of the Education and Science Workers' Union (GEW) describes the decision as problematic, considering Rommel's history of supporting the Nazi regime militarily and propagandistically. Numerous streets in Germany, especially in Rommel's home state of Baden-Württemberg, are named in his honour, including the street near where his last home was located. The Rommel Museum opened in 1989 in the Villa Lindenhof in Herrlingen. The museum now operates under the name Museum Lebenslinien (Lifelines Museum), which presents the lives of Rommel and other notable residents of Herrlingen, including the poet Gertrud Kantorowicz (whose collection is presented together with the Rommel Archive in a building on a road named after Rommel) and the educators Anna Essinger and Hugo Rosenthal. There is also a Rommel Museum in Mersa Matruh in Egypt, which opened in 1977 and is located in one of Rommel's former headquarters; various other localities and establishments in Mersa Matruh, including Rommel Beach, are also named for Rommel. The reason for the naming is that he respected the Bedouins' traditions and the sanctity of their homes (he always kept his troops at least 2 kilometres from their houses) and refused to poison the wells against the Allies, fearing that doing so would harm the local population. In Italy, the annual marathon tour "Rommel Trail", which is sponsored by the Protezione Civile and the autonomous region of Friuli Venezia Giulia through its tourism agency, celebrates Rommel and the Battle of Caporetto. The naming and sponsorship (at that time by the centre-left PD) were criticised by the politician Giuseppe Civati in 2017.
[ { "paragraph_id": 0, "text": "Johannes Erwin Eugen Rommel (pronounced [ˈɛʁviːn ˈʁɔməl] ; 15 November 1891 – 14 October 1944) was a German Generalfeldmarschall (field marshal) during World War II. Popularly known as the Desert Fox (German: Wüstenfuchs, pronounced [ˈvyːstn̩ˌfʊks] ), he served in the Wehrmacht (armed forces) of Nazi Germany, as well as serving in the Reichswehr of the Weimar Republic, and the army of Imperial Germany. Rommel was injured multiple times in both world wars.", "title": "" }, { "paragraph_id": 1, "text": "Rommel was a highly decorated officer in World War I and was awarded the Pour le Mérite for his actions on the Italian Front. In 1937, he published his classic book on military tactics, Infantry Attacks, drawing on his experiences in that war.", "title": "" }, { "paragraph_id": 2, "text": "In World War II, he commanded the 7th Panzer Division during the 1940 invasion of France. His leadership of German and Italian forces in the North African campaign established his reputation as one of the ablest tank commanders of the war, and earned him the nickname der Wüstenfuchs, \"the Desert Fox\". Among his British adversaries he had a reputation for chivalry, and his phrase \"war without hate\" has been uncritically used to describe the North African campaign. A number of historians have since rejected the phrase as a myth and uncovered numerous examples of German war crimes and abuses towards enemy soldiers and native populations in Africa during the conflict. Other historians note that there is no clear evidence Rommel was involved or aware of these crimes, with some pointing out that the war in the desert, as fought by Rommel and his opponents, still came as close to a clean fight as there was in World War II. He later commanded the German forces opposing the Allied cross-channel invasion of Normandy in June 1944.", "title": "" }, { "paragraph_id": 3, "text": "With the Nazis gaining power in Germany, Rommel gradually accepted the new regime. Historians have given different accounts of the specific period and his motivations. He was a supporter of Adolf Hitler, at least until near the end of the war, if not necessarily sympathetic to the party and the paramilitary forces associated with it. In 1944, Rommel was implicated in the 20 July plot to assassinate Hitler. Because of Rommel's status as a national hero, Hitler wanted to eliminate him quietly instead of having him immediately executed, as many other plotters were. Rommel was given a choice between suicide, in return for assurances that his reputation would remain intact and that his family would not be persecuted following his death, or facing a trial that would result in his disgrace and execution; he chose the former and took a cyanide pill. Rommel was given a state funeral, and it was announced that he had succumbed to his injuries from the strafing of his staff car in Normandy.", "title": "" }, { "paragraph_id": 4, "text": "Rommel became a larger-than-life figure in both Allied and Nazi propaganda, and in postwar popular culture. Numerous authors portray him as an apolitical, brilliant commander and a victim of Nazi Germany, although this assessment is contested by other authors as the Rommel myth. Rommel's reputation for conducting a clean war was used in the interest of the West German rearmament and reconciliation between the former enemies – the United Kingdom and the United States on one side and the new Federal Republic of Germany on the other. 
Several of Rommel's former subordinates, notably his chief of staff Hans Speidel, played key roles in German rearmament and integration into NATO in the postwar era. The German Army's largest military base, the Field Marshal Rommel Barracks, Augustdorf, and a third ship of Lütjens-class destroyer of the German Navy are both named in his honour. His son Manfred Rommel was the longtime mayor of Stuttgart, Germany and namesake of Stuttgart Airport.", "title": "" }, { "paragraph_id": 5, "text": "Rommel was born on 15 November 1891, in Heidenheim, 45 kilometres (28 mi) from Ulm, in the Kingdom of Württemberg, Southern Germany, then part of the German Empire. He was the third of five children to Erwin Rommel Senior (1860–1913) and his wife Helene von Luz, whose father, Karl von Luz, headed the local government council. As a young man, Rommel's father had been an artillery lieutenant. Rommel had one older sister who was an art teacher and his favourite sibling, one older brother named Manfred who died in infancy, and two younger brothers, of whom one became a successful dentist and the other an opera singer.", "title": "Early life and career" }, { "paragraph_id": 6, "text": "At age 18, Rommel joined the Württemberg Infantry Regiment No. 124 in Weingarten as a Fähnrich (ensign), in 1910, studying at the Officer Cadet School in Danzig. He graduated in November 1911 and was commissioned as a lieutenant in January 1912 and was assigned to the 124th Infantry in Weingarten. He was posted to Ulm in March 1914 to the 49th Field Artillery Regiment, XIII (Royal Württemberg) Corps, as a battery commander. He returned to the 124th when war was declared. While at Cadet School, Rommel met his future wife, 17-year-old Lucia (Lucie) Maria Mollin (1894–1971), of Italian and Polish descent.", "title": "Early life and career" }, { "paragraph_id": 7, "text": "During World War I, Rommel fought in France as well as in the Romanian (notably at the Second Battle of the Jiu Valley) and Italian campaigns. He successfully employed the tactics of penetrating enemy lines with heavy covering fire coupled with rapid advances, as well as moving forward rapidly to a flanking position to arrive at the rear of hostile positions, to achieve tactical surprise. His first combat experience was on 22 August 1914 as a platoon commander near Verdun, when – catching a French garrison unprepared – Rommel and three men opened fire on them without ordering the rest of his platoon forward. The armies continued to skirmish in open engagements throughout September, as the static trench warfare typical of the First World War was still in the future. For his actions in September 1914 and January 1915, Rommel was awarded the Iron Cross, Second Class. Rommel was promoted to Oberleutnant (first lieutenant) and transferred to the newly created Royal Wurttemberg Mountain Battalion of the Alpenkorps in September 1915, as a company commander. In November 1916 in Danzig, Rommel and Lucia married.", "title": "World War I" }, { "paragraph_id": 8, "text": "In August 1917, his unit was involved in the battle for Mount Cosna, a heavily fortified objective on the border between Hungary and Romania, which they took after two weeks of difficult uphill fighting. The Mountain Battalion was next assigned to the Isonzo front, in a mountainous area in Italy. The offensive, known as the Battle of Caporetto, began on 24 October 1917. 
Rommel's battalion, consisting of three rifle companies and a machine gun unit, was part of an attempt to take enemy positions on three mountains: Kolovrat, Matajur, and Stol. In two and a half days, from 25 to 27 October, Rommel and his 150 men captured 81 guns and 9,000 men (including 150 officers), at a loss of six dead and 30 wounded. Rommel achieved this remarkable success by taking advantage of the terrain to outflank the Italian forces, attacking from unexpected directions or behind enemy lines, and taking the initiative to attack when he had orders to the contrary. In one instance, the Italian forces, taken by surprise and believing that their lines had collapsed, surrendered after a brief firefight. In this battle, Rommel helped pioneer infiltration tactics, a new form of manoeuvre warfare just being adopted by German armies, and later by foreign armies, and described by some as Blitzkrieg without tanks, though he played no role in the early adoption of Blitzkrieg in World War II. Acting as advance guard in the capture of Longarone on 9 November, Rommel again decided to attack with a much smaller force. Convinced that they were surrounded by an entire German division, the 1st Italian Infantry Division – 10,000 men – surrendered to Rommel. For this and his actions at Matajur, he received the order of Pour le Mérite.", "title": "World War I" }, { "paragraph_id": 9, "text": "In January 1918, Rommel was promoted to Hauptmann (captain) and assigned to a staff position in the 64th Army Corps, where he served for the remainder of the war.", "title": "World War I" }, { "paragraph_id": 10, "text": "Rommel remained with the 124th Regiment until October 1920. The regiment was involved in quelling riots and civil disturbances that were occurring throughout Germany at this time. Wherever possible, Rommel avoided the use of force in these confrontations. In 1919, he was briefly sent to Friedrichshafen on Lake Constance, where he restored order by \"sheer force of personality\" in the 32nd Internal Security Company, which was composed of rebellious and pro-communist sailors. He decided against storming the nearby city of Lindau, which had been taken by revolutionary communists. Instead, Rommel negotiated with the city council and managed to return it to the legitimate government through diplomatic means. This was followed by his defence of Schwäbisch Gmünd, again bloodless. He was then posted to the Ruhr, where a red army was responsible for fomenting unrest. Historian Raffael Scheck praises Rommel as a coolheaded and moderate mind, exceptional amid the many takeovers of revolutionary cities by regular and irregular units and the associated massive violence.", "title": "Between the wars" }, { "paragraph_id": 11, "text": "According to Reuth, this period gave Rommel the indelible impression that \"Everyone in this Republic was fighting each other,\" along with the direct experience of people who attempted to convert Germany into a socialist republic on Soviet lines. There are similarities with Hitler's experiences: like Rommel, Hitler had known the solidarity of trench warfare and then had participated in the Reichswehr's suppression of the First and Second Bavarian Soviet Republics. The need for national unity thus became a decisive legacy of the first World War. 
Brighton notes that while both believed in the Stab-in-the-back myth, Rommel was able to succeed using peaceful methods because he saw the problem in empty stomachs rather than in Judeo-Bolshevism – which right-wing soldiers such as Hitler blamed for the chaos in Germany.", "title": "Between the wars" }, { "paragraph_id": 12, "text": "On 1 October 1920, Rommel was appointed to a company command with the 13th Infantry Regiment in Stuttgart, a post he held for the next nine years. He was then assigned to an instruction position at the Dresden Infantry School from 1929 to 1933; during this time, in April 1932, he was promoted to major. While at Dresden, he wrote a manual on infantry training, published in 1934. In October 1933, he was promoted to Oberstleutnant (lieutenant colonel) and given his next command, the 3rd Jäger Battalion, 17th Infantry Regiment, stationed at Goslar. Here he first met Hitler, who inspected his troops on 30 September 1934. In September 1935, Rommel was moved to the War Academy in Potsdam as an instructor, serving for the next three years. His book Infanterie greift an (Infantry Attacks), a description of his wartime experiences along with his analysis, was published in 1937. It became a best-seller, which, according to Scheck, later \"enormously influenced\" many armies of the world; Adolf Hitler was one of many who owned a copy.", "title": "Between the wars" }, { "paragraph_id": 13, "text": "Hearing of Rommel's reputation as an outstanding military instructor, in February 1937 Hitler assigned him as the War Ministry liaison officer to the Hitler Youth in charge of military training. Here he clashed with Baldur von Schirach, the Hitler Youth leader, over the training that the boys should receive. Trying to fulfill a mission assigned to him by the Ministry of War, Rommel had twice proposed a plan that would have effectively subordinated Hitler Youth to the army, removing it from NSDAP control. That went against Schirach's express wishes. Schirach appealed directly to Hitler; consequently, Rommel was quietly removed from the project in 1938. He had been promoted to Oberst (colonel), on 1 August 1937, and in 1938, following the Anschluss, he was appointed commandant of the Theresian Military Academy at Wiener Neustadt.", "title": "Between the wars" }, { "paragraph_id": 14, "text": "In October 1938, Hitler specially requested that Rommel be seconded to command the Führerbegleitbatallion (his escort battalion). This unit accompanied Hitler whenever he travelled outside of Germany. During this period, Rommel indulged his interest in engineering and mechanics by learning about the inner workings and maintenance of internal combustion engines and heavy machine guns. He memorised logarithm tables in his spare time and enjoyed skiing and other outdoor sports. Ian F. Beckett writes that by 1938, Rommel drifted towards uncritical acceptance of Nazi regime, quoting Rommel's letter to his wife in which he stated \"The German Wehrmacht is the sword of the new German world view\" as a reaction to speech by Hitler.", "title": "Between the wars" }, { "paragraph_id": 15, "text": "During his visit to Switzerland in 1938, Rommel reported that Swiss soldiers who he met showed \"remarkable understanding of our Jewish problem\". Butler comments that he did share the view (popular in Germany and many European countries during that time) that as a people, the Jews were loyal to themselves rather than the nations which they lived in. 
Despite this fact, other pieces of evidence show that he considered the Nazi racial ideologies rubbish. Searle comments that Rommel knew the official stand of the regime, but in this case, the phrase was ambiguous and there is no evidence after or before this event that he ever sympathised with the antisemitism of the Nazi movement. Rommel's son Manfred Rommel stated in documentary The Real Rommel, published in 2001 by Channel 4 that his father would \"look the other way\" when faced with anti-Jewish violence on the streets. According to the documentary, Rommel also requested proof of \"Aryan descent\" from the Italian boyfriend of his illegitimate daughter Gertrud. According to Remy, during the time Rommel was posted in Goslar, he repeatedly clashed with the SA whose members terrorised the Jews and dissident Goslar citizens. After the Röhm Purge, he mistakenly believed that the worst was over, although restrictions on Jewish businesses were still being imposed and agitation against their community continued. According to Remy, Manfred Rommel recounts that his father knew about and privately disagreed with the government's antisemitism, but by this time, he had not actively campaigned on behalf of the Jews. However, Uri Avnery notes that even when he was a low-ranking officer, he protected the Jews who lived in his district. Manfred Rommel tells the Stuttgarter Nachrichten that their family lived in isolated military lands but knew about the discrimination against the Jews which was occurring on the outside. They could not foresee the enormity of the impending atrocities, about which they only knew much later.", "title": "Between the wars" }, { "paragraph_id": 16, "text": "At one point, Rommel wrote to his wife that Hitler had a \"magnetic, maybe hypnotic, strength\" that had its origin in Hitler's belief that he \"was called upon by God\" and Hitler sometimes \"spoke from the depth of his being [...] like a prophet\".", "title": "Between the wars" }, { "paragraph_id": 17, "text": "Rommel was promoted to Generalmajor on 23 August 1939 and assigned as commander of the Führerbegleitbatallion, tasked with guarding Hitler and his field headquarters during the invasion of Poland, which began on 1 September. According to Remy, Rommel's private letters at this time show that he did not understand Hitler's true nature and intentions, as he quickly went from predicting a swift peaceful settlement of tensions to approving Hitler's reaction (\"bombs will be retaliated with bombs\") to the Gleiwitz incident (a false flag operation staged by Hitler and used as a pretext for the invasion). Hitler took a personal interest in the campaign, often moving close to the front in the Führersonderzug (headquarters train). Rommel attended Hitler's daily war briefings and accompanied him everywhere, making use of the opportunity to observe first-hand the use of tanks and other motorised units. On 26 September Rommel returned to Berlin to set up a new headquarters for his unit in the Reich Chancellery. Rommel briefly returned to occupied Warsaw on 5 October in order to prepare for the German victory parade. In a letter to his wife he claimed that the occupation by Nazi Germany was \"probably welcomed with relief\" by the inhabitants of the ruined city and that they were \"rescued\".", "title": "World War II" }, { "paragraph_id": 18, "text": "Following the invasion of Poland, Rommel began lobbying for command of one of Germany's panzer divisions, of which there were then only ten. 
Rommel's successes in World War I were based on surprise and manoeuvre, two elements for which the new panzer units were ideally suited. Rommel received a promotion to a general's rank from Hitler ahead of more senior officers. Rommel obtained the command he aspired to, despite having been earlier turned down by the army's personnel office, which had offered him command of a mountain division instead. According to Peter Caddick-Adams, he was backed by Hitler, the influential Fourteenth Army commander Wilhelm List (a fellow Württemberger middle-class \"military outsider\") and likely Heinz Guderian, the commander of XIX Army Corps, as well.", "title": "World War II" }, { "paragraph_id": 19, "text": "Going against military protocol, this promotion added to Rommel's growing reputation as one of Hitler's favoured commanders, although his later outstanding leadership in France quelled complaints about his self-promotion and political scheming. The 7th Panzer Division had recently been converted to an armoured division consisting of 218 tanks in three battalions (thus, one tank regiment, instead of the two assigned to a standard panzer division), with two rifle regiments, a motorcycle battalion, an engineer battalion, and an anti-tank battalion. Upon taking command on 10 February 1940, Rommel quickly set his unit to practising the manoeuvres they would need in the upcoming campaign.", "title": "World War II" }, { "paragraph_id": 20, "text": "The invasion began on 10 May 1940. By the third day Rommel and the advance elements of his division, together with a detachment of the 5th Panzer Division, had reached the Meuse, where they found the bridges had already been destroyed (Guderian and Georg-Hans Reinhardt reached the river on the same day). Rommel was active in the forward areas, directing the efforts to make a crossing, which were initially unsuccessful because of suppressive fire by the French on the other side of the river. Rommel brought up tanks and flak units to provide counter-fire and had nearby houses set on fire to create a smokescreen. He sent infantry across in rubber boats, appropriated the bridging tackle of the 5th Panzer Division, personally grabbed a light machine gun to fight off a French counterattack supported by tanks, and went into the water himself, encouraging the sappers and helping lash together the pontoons. By 16 May Rommel reached Avesnes, and contravening orders, he pressed on to Cateau. That night, the French II Army Corps was shattered and on 17 May, Rommel's forces took 10,000 prisoners, losing 36 men in the process. He was surprised to find out only his vanguard had followed his tempestuous surge. The High Command and Hitler had been extremely nervous about his disappearance, although they awarded him the Knight's Cross. Rommel's (and Guderian's) successes and the new possibilities offered by the new tank arm were welcomed by a small number of generals, but worried and paralysed the rest.", "title": "World War II" }, { "paragraph_id": 21, "text": "On 20 May, Rommel reached Arras. General Hermann Hoth received orders that the town should be bypassed and its British garrison thus isolated. He ordered the 5th Panzer Division to move to the west and the 7th Panzer Division to the east, flanked by the SS Division Totenkopf. The following day, the British launched a counterattack in the Battle of Arras. 
It failed and the British withdrew.", "title": "World War II" }, { "paragraph_id": 22, "text": "On 24 May, Generaloberst (Colonel General) Gerd von Rundstedt and Generaloberst Günther von Kluge issued a halt order, which Hitler approved. The reason for this decision is still a matter of debate. The halt order was lifted on 26 May. 7th Panzer continued its advance, reaching Lille on 27 May. The Siege of Lille continued until 31 May, when the French garrison of 40,000 men surrendered. Rommel was summoned to Berlin to meet with Hitler. He was the only divisional commander present at the planning session for Fall Rot (Case Red), the second phase of the invasion of France. By this time the Dunkirk evacuation was complete; over 338,000 Allied troops had been evacuated across the Channel, though they had to leave behind all their heavy equipment and vehicles.", "title": "World War II" }, { "paragraph_id": 23, "text": "Rommel, resuming his advance on 5 June, drove for the River Seine to secure the bridges near Rouen. Advancing 100 kilometres (60 mi) in two days, the division reached Rouen to find it defended by three French tanks which managed to destroy a number of German tanks before being taken out. The German force, enraged by this resistance, forbade fire brigades access to the burning district of the old Norman capital, and as a result most of the historic quarter was reduced to ashes. According to David Fraser, Rommel instructed the German artillery to bombard the city as a \"fire demonstration\". According to one witness report the smoke from burning Rouen was intense enough that it reached Paris. Daniel Allen Butler states that the bridges to the city were already destroyed. After the fall of the city, both black civilians and colonial troops were summarily executed on 9 June by unknown German units. The number of black civilians and prisoners killed is estimated at around 100. According to Butler and Showalter, Rouen fell to the 5th Panzer Division, while Rommel advanced from the Seine towards the Channel. On 10 June, Rommel reached the coast near Dieppe, sending Hoth the message \"Bin an der Küste\" (\"Am on the coast\"). On 17 June, 7th Panzer was ordered to advance on Cherbourg, where additional British evacuations were under way. The division advanced 240 km (150 mi) in 24 hours, and after two days of shelling, the French garrison surrendered on 19 June. The speed and surprise that it was consistently able to achieve, to the point at which both the enemy and the Oberkommando des Heeres (OKH; German \"High Command of the Army\") at times lost track of its whereabouts, earned the 7th Panzers the nickname Gespensterdivision (\"ghost division\").", "title": "World War II" }, { "paragraph_id": 24, "text": "After the armistice with the French was signed on 22 June, the division was placed in reserve, being sent first to the Somme and then to Bordeaux to re-equip and prepare for Unternehmen Seelöwe (Operation Sea Lion), the planned invasion of Britain. This invasion was later cancelled, as Germany was not able to acquire the air superiority needed for a successful outcome, while the Kriegsmarine was massively outnumbered by the Royal Navy.", "title": "World War II" }, { "paragraph_id": 25, "text": "On 6 February 1941, Rommel was appointed commander of the new Afrika Korps (Deutsches Afrika Korps; DAK), consisting of the 5th Light Division (later renamed 21st Panzer Division) and of the 15th Panzer Division. 
He was promoted to Generalleutnant three days later and flew to Tripoli on 12 February. The DAK had been sent to Libya in Operation Sonnenblume to support Italian troops who had been roundly defeated by British Commonwealth forces in Operation Compass. His efforts in the Western Desert Campaign earned Rommel the nickname the \"Desert Fox\" from journalists on both sides of the war. Allied troops in Africa were commanded by General Archibald Wavell, Commander-in-Chief, Middle East Command.", "title": "World War II" }, { "paragraph_id": 26, "text": "Rommel and his troops were technically subordinate to Italian commander-in-chief General Italo Gariboldi. Disagreeing with the orders of the Oberkommando der Wehrmacht (OKW, German armed forces high command) to assume a defensive posture along the front line at Sirte, Rommel resorted to subterfuge and insubordination to take the war to the British. According to Remy, the General Staff tried to slow him down but Hitler encouraged him to advance—an expression of the conflict that had existed between Hitler and the army leadership since the invasion of Poland. He decided to launch a limited offensive on 24 March with the 5th Light Division, supported by two Italian divisions. This thrust was not anticipated by the British, who had Ultra intelligence showing that Rommel had orders to remain on the defensive until at least May, when the 15th Panzer Division were due to arrive.", "title": "World War II" }, { "paragraph_id": 27, "text": "The British Western Desert Force had meanwhile been weakened by the transfer in mid-February of three divisions for the Battle of Greece. They fell back to Mersa El Brega and started constructing defensive works. After a day of fierce fighting on 31 March, the Germans captured Mersa El Brega. Splitting his force into three groups, Rommel resumed the advance on 3 April. Benghazi fell that night as the British pulled out of the city. Gariboldi, who had ordered Rommel to stay in Mersa El Brega, was furious. Rommel was equally forceful in his response, telling Gariboldi, \"One cannot permit unique opportunities to slip by for the sake of trifles.\" A signal arrived from General Franz Halder reminding Rommel that he was to halt in Mersa El Brega. Knowing Gariboldi could not speak German, Rommel told him the message gave him complete freedom of action. Gariboldi backed down. Throughout the campaign, fuel supply was problematic, as no petrol was available locally; it had to be brought from Europe by tanker and then carried by road to where it was needed. Food and fresh water were also in short supply, and it was difficult to move tanks and other equipment off-road through the sand. Cyrenaica was captured by 8 April, except for the port city of Tobruk, which was besieged on 11 April.", "title": "World War II" }, { "paragraph_id": 28, "text": "The siege of Tobruk was not technically a siege, as the defenders were still able to move supplies and reinforcements into the city via the port. Rommel knew that by capturing the port he could greatly reduce the length of his supply lines and increase his overall port capacity, which was insufficient even for day-to-day operations and only half that needed for offensive operations. The city, which had been heavily fortified by the Italians during their 30-year occupation, was garrisoned by 36,000 Commonwealth troops, commanded by Australian Lieutenant General Leslie Morshead. 
Hoping to catch the defenders off-guard, Rommel launched a failed attack on 14 April.", "title": "World War II" }, { "paragraph_id": 29, "text": "Rommel requested reinforcements, but the OKW, then completing preparations for Operation Barbarossa, refused. General Friedrich Paulus, head of the Operations Branch of the OKH, arrived on 25 April to review the situation. He was present for a second failed attack on the city on 30 April. On 4 May, Paulus ordered that no further attempts should be made to take Tobruk via a direct assault. Following a failed counter-attack in Operation Brevity in May, Wavell launched Operation Battleaxe on 15 June; this attack was also defeated. The defeat resulted in Churchill replacing Wavell with General Claude Auchinleck as theatre commander.", "title": "World War II" }, { "paragraph_id": 30, "text": "In August, Rommel was appointed commander of the newly created Panzer Group Africa (later redesignated Panzer Army Africa), with Fritz Bayerlein as his chief of staff. The Afrika Korps, comprising the 15th Panzer Division and the 5th Light Division (now reinforced and redesignated the 21st Panzer Division), was put under the command of Generalleutnant Ludwig Crüwell. In addition to the Afrika Korps, Rommel's Panzer Group had the 90th Light Division and four Italian divisions, three infantry divisions investing Tobruk, and one holding Bardia. The two Italian armoured divisions, formed into the Italian XX Motorized Corps under the command of General Gastone Gambara, were under Italian control. Two months later, Hitler decided he must have German officers in better control of the Mediterranean theatre, and appointed Field Marshal Albert Kesselring as Commander in Chief, South. Kesselring was ordered to get control of the air and sea between Africa and Italy.", "title": "World War II" }, { "paragraph_id": 31, "text": "Following his success in Battleaxe, Rommel returned his attention to the capture of Tobruk. He made preparations for a new offensive, to be launched between 15 and 20 November. Meanwhile, Auchinleck reorganised Allied forces and strengthened them to two corps, XXX and XIII, which formed the British Eighth Army. It was placed under the command of Alan Cunningham. Auchinleck launched Operation Crusader, a major offensive to relieve Tobruk, on 18 November 1941. Rommel reluctantly decided on 20 November to call off his planned attack on Tobruk.", "title": "World War II" }, { "paragraph_id": 32, "text": "In four days of heavy fighting, the Eighth Army lost 530 tanks and Rommel only 100. Wanting to exploit the British halt and their apparent disorganisation, on 24 November Rommel counterattacked near the Egyptian border in an operation that became known as the \"dash to the wire\". Cunningham asked Auchinleck for permission to withdraw into Egypt, but Auchinleck refused, and soon replaced Cunningham as commander of Eighth Army with Major General Neil Ritchie. The German counterattack stalled as it outran its supplies and met stiffening resistance, and was criticised by the German High Command and some of Rommel's staff officers.", "title": "World War II" }, { "paragraph_id": 33, "text": "While Rommel drove into Egypt, the remaining Commonwealth forces east of Tobruk threatened the weak Axis lines there. Unable to reach Rommel for several days, Rommel's Chief of Staff, Siegfried Westphal, ordered the 21st Panzer Division withdrawn to support the siege of Tobruk. 
On 27 November, the British attack on Tobruk linked up with the defenders, and Rommel, having suffered losses that could not easily be replaced, had to concentrate on regrouping the divisions that had attacked into Egypt. By 7 December, Rommel fell back to a defensive line at Gazala, just west of Tobruk, all the while under heavy attack from the Desert Air Force. The Allies kept up the pressure, and Rommel was forced to retreat all the way back to the starting positions he had held in March, reaching El Agheila in December 1941. The British had retaken almost all of Cyrenaica, but Rommel's retreat dramatically shortened his supply lines.", "title": "World War II" }, { "paragraph_id": 34, "text": "On 5 January 1942, the Afrika Korps received 55 tanks and new supplies and Rommel started planning a counterattack, which he launched on 21 January. Caught by surprise, the Allies lost over 110 tanks and other heavy equipment. The Axis forces retook Benghazi on 29 January and Timimi on 3 February, with the Allies pulling back to a defensive line just before the Tobruk area south of the coastal town of Gazala. Between December 1941 and June 1942, Rommel had excellent information about the disposition and intentions of the Commonwealth forces. Bonner Fellers, US military attaché in Egypt, was sending detailed reports to the US State Department using a compromised code.", "title": "World War II" }, { "paragraph_id": 35, "text": "Following Kesselring's successes in creating local air superiority around the British naval and air bases at Malta in April 1942, an increased flow of supplies reached the Axis forces in Africa. With his forces strengthened, Rommel contemplated a major offensive operation for the end of May. He knew the British were planning offensive operations as well, and he hoped to pre-empt them. Early in the afternoon of 26 May 1942, Rommel attacked first and the Battle of Gazala commenced. Under the cover of darkness, the bulk of Rommel's motorised and armoured forces drove south to skirt the left flank of the British, coming up behind them and attacking to the north the following morning.", "title": "World War II" }, { "paragraph_id": 36, "text": "On 30 May, Rommel resumed the offensive, and on 1 June, Rommel accepted the surrender of some 3,000 Commonwealth soldiers. On 6 June, Rommel's forces assaulted the Free French strongpoint in the Battle of Bir Hakeim, but the defenders continued to thwart the attack until finally evacuating on 10 June. Rommel then shifted his attack north; threatened with being completely cut off, the British began a retreat eastward toward Egypt on 14 June, the so-called \"Gazala Gallop\".", "title": "World War II" }, { "paragraph_id": 37, "text": "The assault on Tobruk proper began at dawn on 20 June, and the British surrendered at dawn the following day. Rommel's forces captured 32,000 Commonwealth troops, the port, and huge quantities of supplies. Only at the fall of Singapore, earlier that year, had more British Commonwealth troops been captured at one time. On 22 June, Hitler promoted Rommel to Generalfeldmarschall for this victory. Following his success at Gazala and Tobruk, Rommel wanted to seize the moment and not allow 8th Army a chance to regroup. 
He strongly argued that the Panzerarmee should advance into Egypt and drive on to Alexandria and the Suez Canal, as this would place almost all the Mediterranean coastline in Axis hands and, according to Rommel, potentially lead to the capture from the south of the oil fields in the Caucasus and Middle East.", "title": "World War II" }, { "paragraph_id": 38, "text": "Rommel's success at Tobruk worked against him, as Hitler no longer felt it was necessary to proceed with Operation Herkules, the proposed attack on Malta. Auchinleck relieved Ritchie of command of the Eighth Army on 25 June, and temporarily took command himself. Rommel knew that delay would only benefit the British, who continued to receive supplies at a faster rate than Rommel could hope to achieve. He pressed an attack on the heavily fortified town of Mersa Matruh, which Auchinleck had designated as the fall-back position, surrounding it on 28 June. The fortress fell to the Germans on 29 June. In addition to stockpiles of fuel and other supplies, the British abandoned hundreds of tanks and trucks. Those that were functional were put into service by the Panzerwaffe.", "title": "World War II" }, { "paragraph_id": 39, "text": "Rommel continued his pursuit of the Eighth Army, which had fallen back to heavily prepared defensive positions at El Alamein. This region is a natural choke point, where the Qattara Depression creates a relatively short line to defend that could not be outflanked to the south because of the steep escarpment. During this time, the Germans prepared numerous propaganda postcards and leaflets for the Egyptian and Syrian populations, urging them to \"chase English out of the cities\", warning them about the \"Jewish peril\", with one leaflet, printed in 296,000 copies and aimed at Syria, stating among other things:", "title": "World War II" }, { "paragraph_id": 40, "text": "Because Marshal Rommel, at the head of the brave Axis troops, is already rattling the last gates of England's power! Arabs! Help your friends achieve their goal: abolishing the English-Jewish-American tyranny!", "title": "World War II" }, { "paragraph_id": 41, "text": "On 1 July, the First Battle of El Alamein began. Rommel had around 100 available tanks. The Allies were able to achieve local air superiority, with heavy bombers attacking the 15th and 21st Panzers, who had also been delayed by a sandstorm. The 90th Light Division veered off course and were pinned down by South African artillery fire. Rommel continued to attempt to advance for two more days, but repeated sorties by the Desert Air Force meant he could make no progress. On 3 July, he wrote in his diary that his strength had \"faded away\". Attacks by 21st Panzer on 13 and 14 July were repulsed, and an Australian attack on 16–17 July was held off with difficulty. Throughout the first half of July, Auchinleck concentrated attacks on the Italian 60th Infantry Division Sabratha at Tel el Eisa. The ridge was captured by the 26th Australian Brigade on 16 July. Both sides suffered similar losses throughout the month, but the Axis supply situation remained less favourable. Rommel realised that the tide was turning. A break in the action took place at the end of July as both sides rested and regrouped.", "title": "World War II" }, { "paragraph_id": 42, "text": "Preparing for a renewed drive, the British replaced Auchinleck with General Harold Alexander on 8 August. Bernard Montgomery was made the new commander of Eighth Army that same day. 
The Eighth Army had initially been assigned to General William Gott, but he was killed when his plane was shot down on 7 August. Rommel knew that a British convoy carrying over 100,000 tons of supplies was due to arrive in September. He decided to launch an attack at the end of August with the 15th and 21st Panzer Division, 90th Light Division, and the Italian XX Motorized Corps in a drive through the southern flank of the El Alamein lines. Expecting an attack sooner rather than later, Montgomery fortified the Alam el Halfa ridge with the 44th Division, and positioned the 7th Armoured Division about 25 kilometres (15 mi) to the south.", "title": "World War II" }, { "paragraph_id": 43, "text": "The Battle of Alam el Halfa was launched on 30 August. The terrain left Rommel with no choice but to follow a similar tactic as he had at previous battles: the bulk of the forces attempted to sweep around from the south while secondary attacks were launched on the remainder of the front. It took much longer than anticipated to get through the minefields in the southern sector, and the tanks got bogged down in unexpected patches of quicksand (Montgomery had arranged for Rommel to acquire a falsified map of the terrain). Under heavy fire from British artillery and aircraft, and in the face of well prepared positions that Rommel could not hope to outflank for lack of fuel, the attack stalled. By 2 September, Rommel realised the battle was unwinnable, and decided to withdraw.", "title": "World War II" }, { "paragraph_id": 44, "text": "On the night of 3 September, the 2nd New Zealand Division and 7th Armoured Division positioned to the north engaged in an assault, but they were repelled in a fierce rearguard action by the 90th Light Division. Montgomery called off further action to preserve his strength and allow for further desert training for his forces. In the attack, Rommel had suffered 2,940 casualties and lost 50 tanks, a similar number of guns, and 400 lorries, vital for supplies and movement. The British losses, except tank losses of 68, were much less, further adding to the numerical inferiority of Panzer Army Africa. The Desert Air Force inflicted the highest proportions of damage on Rommel's forces. He now realised the war in Africa could not be won. Physically exhausted and suffering from a liver infection and low blood pressure, Rommel flew home to Germany to recover his health. General Georg Stumme was left in command in Rommel's absence.", "title": "World War II" }, { "paragraph_id": 45, "text": "Improved decoding by British intelligence (see Ultra) meant that the Allies had advance knowledge of virtually every Mediterranean convoy, and only 30 per cent of shipments were getting through. In addition, Mussolini diverted supplies intended for the front to his garrison at Tripoli and refused to release any additional troops to Rommel. The increasing Allied air superiority and lack of fuel meant Rommel was forced to take a more defensive posture than he would have liked for the second Battle of El Alamein. The German defences to the west of the town included a minefield eight kilometres (five miles) deep with the main defensive line – itself several thousand yards deep – to its west. This, Rommel hoped, would allow his infantry to hold the line at any point until motorised and armoured units in reserve could move up and counterattack any Allied breaches. The British offensive began on 23 October. 
Stumme, in command in Rommel's absence, died of an apparent heart attack while examining the front on 24 October, and Rommel was ordered to return from his medical leave, arriving on the 25th. Montgomery's intention was to clear a narrow path through the minefield at the northern part of the defences, at the area called Kidney Ridge, with a feint to the south. By the end of 25 October, the 15th Panzer, the defenders in this sector, had only 31 serviceable tanks remaining of their initial force of 119. Rommel brought the 21st Panzer and Ariete Divisions north on 26 October, to bolster the sector. On 28 October, Montgomery shifted his focus to the coast, ordering his 1st and 10th Armoured Divisions to attempt to swing around and cut off Rommel's line of retreat. Meanwhile, Rommel concentrated his attack on the Allied salient at Kidney Ridge, inflicting heavy losses. However, Rommel had only 150 operational tanks remaining, and Montgomery had 800, many of them Shermans.", "title": "World War II" }, { "paragraph_id": 46, "text": "Montgomery, seeing his armoured brigades losing tanks at an alarming rate, stopped major attacks until the early hours of 2 November, when he opened Operation Supercharge, with a massive artillery barrage. Due to heavy losses in tanks, towards the end of the day, Rommel ordered his forces to disengage and begin to withdraw. At midnight, he informed the OKW of his decision, and received a reply directly from Hitler the following afternoon: he ordered Rommel and his troops to hold their position to the last man. Rommel, who believed that the lives of his soldiers should never be squandered needlessly, was stunned. Rommel initially complied with the order, but after discussions with Kesselring and others, he issued orders for a retreat on 4 November. The delay proved costly in terms of his ability to get his forces out of Egypt. He later said the decision to delay was what he most regretted from his time in Africa. Meanwhile, the British 1st and 7th Armoured Division had broken through the German defences and were preparing to swing north and surround the Axis forces. On the evening of the 4th, Rommel finally received word from Hitler authorising the withdrawal.", "title": "World War II" }, { "paragraph_id": 47, "text": "As Rommel attempted to withdraw his forces before the British could cut off his retreat, he fought a series of delaying actions. Heavy rains slowed movements and grounded the Desert Air Force, which aided the withdrawal, yet Rommel's troops were under pressure from the pursuing Eighth Army and had to abandon the trucks of the Italian forces, leaving them behind. Rommel continued to retreat west, aiming for 'Gabes gap' in Tunisia. Kesselring strongly criticised Rommel's decision to retreat all the way to Tunisia, as each airfield the Germans abandoned extended the range of the Allied bombers and fighters. Rommel defended his decision, pointing out that if he tried to assume a defensive position the Allies would destroy his forces and take the airfields anyway; the retreat saved the lives of his remaining men and shortened his supply lines. By now, Rommel's remaining forces fought in reduced strength combat groups, whereas the Allied forces had great numerical superiority and control of the air. 
On his arrival in Tunisia, Rommel noted with some bitterness the reinforcements, including the 10th Panzer Division, arriving in Tunisia following the Allied invasion of Morocco.", "title": "World War II" }, { "paragraph_id": 48, "text": "Having reached Tunisia, Rommel launched an attack against the U.S. II Corps which was threatening to cut his lines of supply north to Tunis. Rommel inflicted a sharp defeat on the American forces at the Kasserine Pass in February, his last battlefield victory of the war, and his first engagement against the United States Army.", "title": "World War II" }, { "paragraph_id": 49, "text": "Rommel immediately turned back against the British forces, occupying the Mareth Line (old French defences on the Libyan border). While Rommel was at Kasserine at the end of January 1943, the Italian General Giovanni Messe was appointed commander of Panzer Army Africa, renamed the Italo-German Panzer Army in recognition of the fact that it consisted of one German and three Italian corps. Though Messe replaced Rommel, he diplomatically deferred to him, and the two coexisted in what was theoretically the same command. On 23 February Army Group Afrika was created with Rommel in command. It included the Italo-German Panzer Army under Messe (renamed 1st Italian Army) and the German 5th Panzer Army in the north of Tunisia under General Hans-Jürgen von Arnim.", "title": "World War II" }, { "paragraph_id": 50, "text": "The last Rommel offensive in North Africa was on 6 March 1943, when he attacked Eighth Army at the Battle of Medenine. The attack was made with 10th, 15th, and 21st Panzer Divisions. Alerted by Ultra intercepts, Montgomery deployed large numbers of anti-tank guns in the path of the offensive. After losing 52 tanks, Rommel called off the assault. On 9 March he returned to Germany. Command was handed over to General Hans-Jürgen von Arnim. Rommel never returned to Africa. The fighting there continued on for another two months, until 13 May 1943, when Messe surrendered the army group to the Allies.", "title": "World War II" }, { "paragraph_id": 51, "text": "On 23 July 1943, Rommel was moved to Greece as commander of Army Group E to counter a possible British invasion. He arrived in Greece on 25 July but was recalled to Berlin the same day following Mussolini's dismissal from office. This caused the German High Command to review the defensive integrity of the Mediterranean and it was decided that Rommel should be posted to Italy as commander of the newly formed Army Group B. On 16 August 1943, Rommel's headquarters moved to Lake Garda in northern Italy and he formally assumed command of the group, consisting of the 44th Infantry Division, the 26th Panzer Division and the 1st SS Panzer Division Leibstandarte SS Adolf Hitler. When Italy announced its armistice with the Allies on 8 September, Rommel's group took part in Operation Achse, disarming the Italian forces.", "title": "World War II" }, { "paragraph_id": 52, "text": "Hitler met with Rommel and Kesselring to discuss future operations in Italy on 30 September 1943. Rommel insisted on a defensive line north of Rome, while Kesselring was more optimistic and advocated holding a line south of Rome. Hitler preferred Kesselring's recommendation, and therefore revoked his previous decision for the subordination of Kesselring's forces to Rommel's army group. 
On 19 October, Hitler decided that Kesselring would be the overall commander of the forces in Italy, sidelining Rommel.", "title": "World War II" }, { "paragraph_id": 53, "text": "Rommel had wrongly predicted that the collapse of the German line in Italy would be fast. On 21 November, Hitler gave Kesselring overall command of the Italian theatre, moving Rommel and Army Group B to Normandy in France with responsibility for defending the French coast against the long-anticipated Allied invasion.", "title": "World War II" }, { "paragraph_id": 54, "text": "On 4 November 1943, Rommel became General Inspector of the Western Defences. He was given a staff that befitted an army group commander, and the powers to travel, examine and make suggestions on how to improve the defences. Hitler, who was having a disagreement with him over military matters, intended to use Rommel as a psychological trump card.", "title": "World War II" }, { "paragraph_id": 55, "text": "There was broad disagreement in the German High Command as to how best to meet the expected Allied invasion of Northern France. The Commander-in-Chief West, Gerd von Rundstedt, believed there was no way to stop the invasion near the beaches because of the Allied navies' firepower, as had been experienced at Salerno. He argued that the German armour should be held in reserve well inland near Paris, where it could be used to counter-attack in force in line with more traditional military doctrine. The Allies could be allowed to extend themselves deep into France, where a battle for control would be fought, allowing the Germans to envelop the Allied forces in a pincer movement, cutting off their avenue of retreat. He feared that piecemeal commitment of the armoured forces would cause them to become caught in a battle of attrition which they could not hope to win.", "title": "World War II" }, { "paragraph_id": 56, "text": "The notion of holding the armour inland as a mobile reserve force from which to mount a powerful counterattack reflected the classic use of armoured formations as seen in France in 1940. These tactics were still effective on the Eastern Front, where control of the air was important but did not dominate the action. Rommel's own experiences at the end of the North African campaign revealed to him that the Germans would not be able to preserve their armour from air attack for this type of massed assault. Rommel believed their only opportunity would be to oppose the landings directly at the beaches, and to counterattack there before the invaders could become well established. Though there had been some defensive positions established and gun emplacements made, the Atlantic Wall was a token defensive line. Rundstedt had confided to Rommel that it was for propaganda purposes only.", "title": "World War II" }, { "paragraph_id": 57, "text": "Upon arriving in Northern France, Rommel was dismayed by the lack of completed works. According to Ruge, Rommel was in a staff position and could not issue orders, but he made every effort to explain his plan to commanders down to the platoon level, who took up his words eagerly, although \"more or less open\" opposition from above slowed down the process. Rundstedt intervened and supported Rommel's request to be made a commander. It was granted on 15 January 1944.", "title": "World War II" }, { "paragraph_id": 58, "text": "He and his staff set out to improve the fortifications along the Atlantic Wall with great energy and engineering skill. 
This was a compromise: Rommel now commanded the 7th and 15th Armies; he also had authority over a 20-kilometre-wide strip of coastal land between the Zuiderzee and the mouth of the Loire. The chain of command was convoluted: the air force and navy had their own chiefs, as did the forces in southern and southwestern France and the Panzer group; Rommel also needed Hitler's permission to use the tank divisions. Rommel had millions of mines laid and thousands of tank traps and obstacles set up on the beaches and throughout the countryside, including in fields suitable for glider aircraft landings, the so-called Rommel's asparagus (the Allies would later counter these with Hobart's Funnies). In April 1944, Rommel promised Hitler that the preparations would be complete by 1 May, a promise he was unable to keep. By the time of the Allied invasion, the preparations were far from finished. The quality of some of the troops manning them was poor and many bunkers lacked sufficient stocks of ammunition.", "title": "World War II" }, { "paragraph_id": 59, "text": "Rundstedt expected the Allies to invade in the Pas-de-Calais because it was the shortest crossing point from Britain, its port facilities were essential to supplying a large invasion force, and the distance from Calais to Germany was relatively short. Rommel's and Hitler's views on the matter are debated among authors, with both seeming to change their positions.", "title": "World War II" }, { "paragraph_id": 60, "text": "Hitler vacillated between the two strategies. In late April, he ordered the I SS Panzer Corps placed near Paris, far enough inland to be useless to Rommel, but not far enough for Rundstedt. Rommel moved those armoured formations under his command as far forward as possible, ordering General Erich Marcks, commanding the 84th Corps defending the Normandy section, to move his reserves into the frontline. While Rundstedt was willing to delegate a majority of the responsibilities to Rommel (the central reserve was Rundstedt's idea but he did not oppose some form of coastal defence), Rommel's strategy of an armour-supported coastal defence line was opposed by some officers, most notably Leo Geyr von Schweppenburg, who was supported by Guderian. Hitler compromised and gave Rommel three divisions (the 2nd, the 21st and the 116th Panzer), let Rundstedt retain four and turned the other three over to Army Group G, pleasing no one.", "title": "World War II" }, { "paragraph_id": 61, "text": "The Allies staged elaborate deceptions for D-Day (see Operation Fortitude), giving the impression that the landings would be at Calais. Although Hitler himself expected a Normandy invasion for a while, Rommel and most Army commanders in France believed there would be two invasions, with the main invasion coming at the Pas-de-Calais. Rommel drove defensive preparations all along the coast of Northern France, particularly concentrating fortification building in the River Somme estuary. By D-Day on 6 June 1944, nearly all the German staff officers, including Hitler's staff, believed that the Pas-de-Calais was going to be the main invasion site, and continued to believe so even after the landings in Normandy had occurred.", "title": "World War II" }, { "paragraph_id": 62, "text": "The 5 June storm in the Channel seemed to make a landing very unlikely, and a number of the senior officers left their units for training exercises and various other efforts. 
On 4 June the chief meteorologist of the 3rd Air Fleet reported that the weather in the Channel was so poor there could be no landing attempted for two weeks. On 5 June, Rommel left France and on 6 June, he was at home celebrating his wife's 50th birthday. He was recalled and returned to his headquarters at 10 pm. Meanwhile, earlier in the day, Rundstedt had requested the reserves be transferred to his command. At 10 am, Keitel advised that Hitler declined to release the reserves but that Rundstedt could move the 12th SS Panzer Division Hitlerjugend closer to the coast, with the Panzer-Lehr-Division placed on standby. Later in the day, Rundstedt received authorisation to move additional units in preparation for a counterattack, which Rundstedt decided to launch on 7 June. Upon arrival, Rommel concurred with the plan. By nightfall, Rundstedt, Rommel and Speidel continued to believe that the Normandy landing might have been a diversionary attack, as the Allied deception measures still pointed towards Calais. The 7 June counterattack did not take place because Allied air bombardments prevented the 12th SS's timely arrival. All this left the German command structure in France in disarray during the opening hours of the D-Day invasion.", "title": "World War II" }, { "paragraph_id": 63, "text": "The Allies secured five beachheads by nightfall of 6 June, landing 155,000 troops. The Allies pushed ashore and expanded their beachhead despite strong German resistance. Rommel believed that if his armies pulled out of range of Allied naval fire, it would give them a chance to regroup and re-engage the enemy later with a better chance of success. While he managed to convince Rundstedt, they still needed to win over Hitler. At a meeting with Hitler at his Wolfsschlucht II headquarters in Margival in northern France on 17 June, Rommel warned Hitler about the inevitable collapse of the German defences, but was rebuffed and told to focus on military operations.", "title": "World War II" }, { "paragraph_id": 64, "text": "By mid-July the German position was crumbling. On 17 July 1944, as Rommel was returning from visiting the headquarters of the I SS Panzer Corps, a fighter plane piloted by either Charley Fox of 412 Squadron RCAF, Jacques Remlinger of No. 602 Squadron RAF, or Johannes Jacobus le Roux of No. 602 Squadron RAF strafed his staff car near Sainte-Foy-de-Montgommery. The driver sped up and attempted to get off the main roadway, but a 20 mm round shattered his left arm, causing the vehicle to veer off the road and crash into trees. Rommel was thrown from the car, suffering injuries to the left side of his face from glass shards and three fractures to his skull. He was hospitalised with major head injuries (assumed to be almost certainly fatal).", "title": "World War II" }, { "paragraph_id": 65, "text": "The role that Rommel played in the military's resistance against Hitler or the 20 July plot is difficult to ascertain, as most of the leaders who were directly involved did not survive and limited documentation on the conspirators' plans and preparations exists. One piece of evidence that points to the possibility that Rommel came to support the assassination plan was General Eberbach's confession to his son (eavesdropped on by British agencies) while in British captivity, which stated that Rommel explicitly said to him that Hitler and his close associates had to be killed because this would be the only way out for Germany. 
This conversation occurred about a month before Rommel was coerced into suicide.", "title": "World War II" }, { "paragraph_id": 66, "text": "Other notable evidence includes the papers of Rudolf Hartmann (who survived the later purge) and Carl-Heinrich von Stülpnagel, who were among the leaders of the military resistance (alongside Rommel's chief of staff General Hans Speidel, Colonel Karl-Richard Koßmann, Colonel Eberhard Finckh and Lieutenant Colonel Caesar von Hofacker). These papers, accidentally discovered by historian Christian Schweizer in 2018 while doing research on Rudolf Hartmann, include Hartmann's eyewitness account of a conversation between Rommel and Stülpnagel in May 1944, as well as photos of the mid-May 1944 meeting between the inner circle of the resistance and Rommel at Koßmann's house. According to Hartmann, by the end of May, in another meeting at Hartmann's quarters in Mareil–Marly, Rommel showed \"decisive determination\" and clear approval of the inner circle's plan. In a post-war account by Karl Strölin, three of Rommel's friends—the Oberbürgermeister of Stuttgart, Strölin (who had served with Rommel in the First World War), Alexander von Falkenhausen and Stülpnagel—began efforts to bring Rommel into the anti-Hitler conspiracy in early 1944. According to Strölin, sometime in February, Rommel agreed to lend his support to the resistance.", "title": "World War II" }, { "paragraph_id": 67, "text": "On 15 April 1944, Rommel's new chief of staff, Hans Speidel, arrived in Normandy and reintroduced Rommel to Stülpnagel. Speidel had previously been connected to Carl Goerdeler, the civilian leader of the resistance, but not to the plotters led by Claus von Stauffenberg, and came to Stauffenberg's attention only upon his appointment to Rommel's headquarters. The conspirators felt they needed the support of a field marshal on active duty. Erwin von Witzleben, who would have become commander-in-chief of the Wehrmacht had the plot succeeded, was a field marshal, but had been inactive since 1942. The conspirators gave instructions to Speidel to bring Rommel into their circle. Speidel met with former foreign minister Konstantin von Neurath and Strölin on 27 May in Germany, ostensibly at Rommel's request, although the latter was not present. Neurath and Strölin suggested opening immediate surrender negotiations in the West, and, according to Speidel, Rommel agreed to further discussions and preparations. Around the same timeframe, the plotters in Berlin were not aware that Rommel had allegedly decided to take part in the conspiracy. On 16 May, they informed Allen Dulles, through whom they hoped to negotiate with the Western Allies, that Rommel could not be counted on for support.", "title": "World War II" }, { "paragraph_id": 68, "text": "At least initially, Rommel opposed assassinating Hitler. According to some authors, he gradually changed his attitude. After the war, his widow—among others—maintained that Rommel believed an assassination attempt would spark civil war in Germany and Austria, and Hitler would have become a martyr for a lasting cause. Instead, Rommel reportedly suggested that Hitler be arrested and brought to trial for his crimes; he did not attempt to implement this plan when Hitler visited Margival, France, on 17 June. The arrest plan would have been highly improbable as Hitler's security was extremely tight. Rommel would have known this, having commanded Hitler's army protection detail in 1939. 
He was in favour of peace negotiations and repeatedly urged Hitler to negotiate with the Allies, a stance some have dubbed \"hopelessly naive\", considering that no one would have trusted Hitler: \"As naive as it was idealistic, the attitude he showed to the man he had sworn loyalty\".", "title": "World War II" }, { "paragraph_id": 69, "text": "According to Reuth, the reason Lucie Rommel did not want her husband to be associated with any conspiracy was that even after the war, the German population neither grasped nor wanted to comprehend the reality of the genocide, and thus conspirators were still treated as traitors and outcasts. On the other hand, the resistance depended on the reputation of Rommel to win over the population. Some officers who had worked with Rommel also recognised the relationship between Rommel and the resistance: Westphal said that Rommel did not want any more senseless sacrifices. Butler, using Ruge's recollections, reports that when told by Hitler himself that \"no one will make peace with me\", Rommel told Hitler that if he was the obstacle to peace, he should resign or kill himself, but Hitler insisted on fanatical defence.", "title": "World War II" }, { "paragraph_id": 70, "text": "Reuth, based on Jodl's testimony, reports that Rommel forcefully presented the situation and asked for political solutions from Hitler, who retorted that Rommel should leave politics to him. Brighton comments that Rommel seemed devoted even though he no longer had much faith in Hitler, given that he kept informing Hitler in person and by letter about his changing beliefs, despite facing a military dilemma as well as a personal struggle. Lieb remarks that Rommel's attitude in describing the situation honestly and demanding political solutions was almost without precedent and contrary to the attitude of many other generals. Remy comments that Rommel put himself and his family (whom he had briefly considered evacuating to France, but refrained from doing so) at risk for the resistance out of a combination of his concern for the fate of Germany, his indignation at atrocities and the influence of people around him.", "title": "World War II" }, { "paragraph_id": 71, "text": "On 15 July, Rommel wrote a letter to Hitler giving him a \"last chance\" to end the hostilities with the Western Allies, urging Hitler to \"draw the proper conclusions without delay\". What Rommel did not know was that the letter took two weeks to reach Hitler because of Kluge's precautions. Various authors report that many German generals in Normandy, including some SS officers like Hausser, Bittrich, Dietrich (a hard-core Nazi and Hitler's long-time supporter) and Rommel's former opponent Geyr von Schweppenburg, pledged support to him even against Hitler's orders, while Kluge supported him with much hesitation. Rundstedt encouraged Rommel to carry out his plans but refused to do anything himself, remarking that it had to be a man who was still young and loved by the people, while Erich von Manstein was also approached by Rommel but categorically refused, although he did not report them to Hitler either. Peter Hoffmann reports that he also attracted into his orbit officials who had previously refused to support the conspiracy, like Julius Dorpmüller and Karl Kaufmann (according to Russell A. 
Hart, reliable details of the conversations are now lost, although they certainly met).", "title": "World War II" }, { "paragraph_id": 72, "text": "On 17 July 1944, Rommel was incapacitated by an Allied air attack, which many authors describe as a fateful event that drastically altered the outcome of the bomb plot. Writer Ernst Jünger commented: \"The blow that felled Rommel ... robbed the plan of the shoulders that were to be entrusted the double weight of war and civil war - the only man who had enough naivety to counter the simple terror that those he was about to go against possessed.\" After the failed bomb attack of 20 July, many conspirators were arrested and the dragnet expanded to thousands. Rommel was first implicated when Stülpnagel, after his suicide attempt, repeatedly muttered \"Rommel\" in delirium. Under torture, Hofacker named Rommel as one of the participants. Additionally, Goerdeler had written down Rommel's name on a list as a potential Reich President (according to Strölin; they had not managed to announce this intention to Rommel yet, and he probably never heard of it until the end of his life).", "title": "World War II" }, { "paragraph_id": 73, "text": "On 27 September, Martin Bormann submitted to Hitler a memorandum which claimed that \"the late General Stülpnagel, Colonel Hofacker, Kluge's nephew who has been executed, Lieutenant Colonel Rathgens, and several ... living defendants have testified that Field Marshal Rommel was perfectly in the picture about the assassination plan and has promised to be at the disposal of the New Government.\" Gestapo agents were sent to Rommel's house in Ulm and placed him under surveillance. Historian Peter Lieb considers the memorandum, as well as Eberbach's conversation and the testimonies of surviving resistance members (including Hartmann), to be the three key sources that indicate Rommel's support of the assassination plan. He further notes that while Speidel had an interest in promoting his own post-war career, his testimonies should not be dismissed, considering his bravery as an early resistance figure. Remy writes that even more important than Rommel's attitude to the assassination is the fact that Rommel had his own plan to end the war. He began to contemplate this plan some months after El Alamein and carried it out with lonely determination and conviction, and in the end had managed to bring military leaders in the West to his side.", "title": "World War II" }, { "paragraph_id": 74, "text": "Rommel's case was turned over to the \"Court of Military Honour\"—a drumhead court-martial convened to decide the fate of officers involved in the conspiracy. The court included Generalfeldmarschall Wilhelm Keitel, Generalfeldmarschall Gerd von Rundstedt, Generaloberst Heinz Guderian, General der Infanterie Walther Schroth and Generalleutnant Karl-Wilhelm Specht, with General der Infanterie Karl Kriebel and Generalleutnant Heinrich Kirchheim (whom Rommel had fired after Tobruk in 1941) as deputy members and Generalmajor Ernst Maisel as protocol officer. The Court acquired information from Speidel, Hofacker and others that implicated Rommel, with Keitel and Ernst Kaltenbrunner assuming that he had taken part in the subversion. Keitel and Guderian then made the decision that favoured Speidel's case and at the same time shifted the blame to Rommel. By normal procedure, this would lead to Rommel's being brought to Roland Freisler's People's Court, a kangaroo court that always decided in favour of the prosecution. 
However, Hitler knew that having Rommel branded and executed as a traitor would severely damage morale on the home front. He thus decided to offer Rommel the chance to take his own life.", "title": "Death" }, { "paragraph_id": 75, "text": "Two generals from Hitler's headquarters, Wilhelm Burgdorf and Ernst Maisel, visited Rommel at his home on 14 October 1944. Burgdorf informed him of the charges against him and offered him three options: he could choose to defend himself personally in front of Hitler in Berlin; if he refused to do so (which would be taken as an admission of guilt), he could face the People's Court, which would have been tantamount to a death sentence; or he could choose death by suicide. In the second case, his family would have suffered even before the all-but-certain conviction and execution, and his staff would have been arrested and executed as well. In the last case, the government would claim that he died a hero and bury him with full military honours, and his family would receive full pension payments. In support of the suicide option, Burgdorf had brought a cyanide capsule.", "title": "Death" }, { "paragraph_id": 76, "text": "Rommel chose suicide, and explained his decision to his wife and son. Wearing his Afrika Korps jacket and carrying his field marshal's baton, he got into Burgdorf's car, driven by SS-Stabsscharführer Heinrich Doose, and was driven out of the village. After stopping, Doose and Maisel walked away from the car, leaving Rommel with Burgdorf. Five minutes later Burgdorf gestured to the two men to return to the car, and Doose noticed that Rommel was slumped over, having taken the cyanide. He died before being taken to the Wagner-Schule field hospital. Ten minutes later, the group telephoned Rommel's wife to inform her of his death.", "title": "Death" }, { "paragraph_id": 77, "text": "The official notice of Rommel's death as reported to the public stated that he had died of either a heart attack or a cerebral embolism—a complication of the skull fractures he had suffered in the earlier strafing of his staff car. To strengthen the story, Hitler ordered an official day of mourning in commemoration of his death. As promised, Rommel was given a state funeral, but it was held in Ulm instead of Berlin, as had been requested by Rommel. Hitler sent Field Marshal Rundstedt (who was unaware that Rommel had died as a result of Hitler's orders) as his representative to the funeral.", "title": "Death" }, { "paragraph_id": 78, "text": "The truth behind Rommel's death became known to the Allies through intelligence officer Charles Marshall's interviews with Rommel's widow, Lucia Rommel, as well as from a letter by Rommel's son Manfred in April 1945.", "title": "Death" }, { "paragraph_id": 79, "text": "Rommel's grave is located in Herrlingen, a short distance west of Ulm. For decades after the war, on the anniversary of his death, veterans of the Africa campaign, including former opponents, would gather at his tomb in Herrlingen.", "title": "Death" }, { "paragraph_id": 80, "text": "On the Italian front in the First World War, Rommel was a successful tactician in fast-developing mobile battle and this shaped his subsequent style as a military commander. He found that taking the initiative and not allowing the enemy forces to regroup led to victory. 
Some authors argue that his enemies were often less organised, second-rate, or depleted, and his tactics were less effective against adequately led, trained and supplied opponents and proved insufficient in the later years of the war. Others point out that through his career, he frequently fought while out-numbered and out-gunned, sometimes overwhelmingly so, while having to deal with internal opponents in Germany who hoped that he would fail.", "title": "Style as military commander" }, { "paragraph_id": 81, "text": "Rommel is praised by numerous authors as a great leader of men. The historian and journalist Basil Liddell Hart concludes that he was a strong leader worshipped by his troops, respected by his adversaries and deserving to be named as one of the \"Great Captains of History\". Owen Connelly concurs, writing that \"No better exemplar of military leadership can be found\" and quoting Friedrich von Mellenthin on the inexplicable mutual understanding that existed between Rommel and his troops. Hitler, though, remarked that, \"Unfortunately Field-Marshal Rommel is a very great leader full of drive in times of success, but an absolute pessimist when he meets the slightest problems.\" Telp criticises Rommel for not extending the benevolence he showed in promoting his own officers' careers to his peers, whom he ignored or slighted in his reports.", "title": "Style as military commander" }, { "paragraph_id": 82, "text": "Taking his opponents by surprise and creating uncertainty in their minds were key elements in Rommel's approach to offensive warfare: he took advantage of sand storms and the dark of night to conceal the movement of his forces. He was aggressive and often directed battle from the front or piloted a reconnaissance aircraft over the lines to get a view of the situation. When the British mounted a commando raid deep behind German lines in an effort to kill Rommel and his staff on the eve of their Crusader offensive, Rommel was indignant that the British expected to find his headquarters 400 kilometres (250 miles) behind his front. Mellenthin and Harald Kuhn write that at times in North Africa his absence from a position of communication made command of the battles of the Afrika Korps difficult. Mellenthin lists Rommel's counterattack during Operation Crusader as one such instance. Butler concurred, saying that leading from the front is a good concept but Rommel took it so far – he frequently directed the actions of a single company or battalion – that he made communication and coordination between units problematic, as well as risking his life to the extent that he could easily have been killed even by his own artillery. Albert Kesselring also complained about Rommel cruising about the battlefield like a division or corps commander; but Gause and Westphal, supporting Rommel, replied that in the African desert only this method would work and that it was useless to try to restrain Rommel anyway. His staff officers, although admiring towards their leader, complained about the self-destructive Spartan lifestyle that made life harder, diminished his effectiveness and forced them to \"bab[y] him as unobtrusively as possible\".", "title": "Style as military commander" }, { "paragraph_id": 83, "text": "For his leadership during the French campaign Rommel received both praise and criticism. Many, such as General Georg Stumme, who had previously commanded 7th Panzer Division, were impressed with the speed and success of Rommel's drive. 
Others were reserved or critical: Kluge, his commanding officer, argued that Rommel's decisions were impulsive and that he claimed too much credit, by falsifying diagrams or by not acknowledging contributions of other units, especially the Luftwaffe. Some pointed out that Rommel's division took the highest casualties in the campaign. Others point out that in exchange for 2,160 casualties and 42 tanks, it captured more than 100,000 prisoners and destroyed nearly two divisions' worth of enemy tanks (about 450 tanks), vehicles and guns.", "title": "Style as military commander" }, { "paragraph_id": 84, "text": "Rommel spoke German with a pronounced southern German or Swabian accent. He was not a part of the Prussian aristocracy that dominated the German high command, and as such was looked upon somewhat suspiciously by the Wehrmacht's traditional power structure. Rommel felt a commander should be physically more robust than the troops he led, and should always show them an example. He expected his subordinate commanders to do the same.", "title": "Style as military commander" }, { "paragraph_id": 85, "text": "Rommel was direct, unbending, tough in his manners, to superiors and subordinates alike, disobedient even to Hitler whenever he saw fit, although gentle and diplomatic to the lower ranks. Despite being publicity-friendly, he was also shy, introverted, clumsy and overly formal even to his closest aides, judging people only on their merits, although loyal and considerate to those who had proved reliability, and he displayed a surprisingly passionate and devoted side to a very small few (including Hitler) with whom he had dropped the seemingly impenetrable barriers.", "title": "Style as military commander" }, { "paragraph_id": 86, "text": "Rommel's relationship with the Italian High Command in North Africa was generally poor. Although he was nominally subordinate to the Italians, he enjoyed a certain degree of autonomy from them; since he was directing their troops in battle as well as his own, this was bound to cause hostility among Italian commanders. Conversely, as the Italian command had control over the supplies of the forces in Africa, they resupplied Italian units preferentially, which was a source of resentment for Rommel and his staff. Rommel's direct and abrasive manner did nothing to smooth these issues.", "title": "Style as military commander" }, { "paragraph_id": 87, "text": "While certainly much less proficient than Rommel in their leadership, aggression, tactical outlook and mobile warfare skills, Italian commanders were competent in logistics, strategy and artillery doctrine: their troops were ill-equipped but well-trained. As such, the Italian commanders were repeatedly at odds with Rommel over concerns with issues of supply. Field Marshal Kesselring was assigned Supreme Commander Mediterranean, at least in part to alleviate command problems between Rommel and the Italians. This effort resulted only in partial success, with Kesselring's own relationship with the Italians being unsteady and Kesselring claiming Rommel ignored him as readily as he ignored the Italians. Rommel often went directly to Hitler with his needs and concerns, taking advantage of the favouritism that the Führer displayed towards him and adding to the distrust that Kesselring and the German High Command already had of him.", "title": "Style as military commander" }, { "paragraph_id": 88, "text": "According to Scianna, opinion among the Italian military leaders was not unanimous. 
In general, Rommel was a target of criticism and a scapegoat for defeat rather than a glorified figure, with certain generals also trying to replace him as the heroic leader or hijack the Rommel myth for their own benefit. Nevertheless, he never became a hated figure, although the \"abandonment myth\", despite being repudiated by officers of the X Corps themselves, was long-lived. Many found Rommel's chaotic leadership and emotional character hard to work with, yet the Italians held him in higher regard than other German senior commanders, militarily and personally.", "title": "Style as military commander" }, { "paragraph_id": 89, "text": "Very different, however, was the perception of Rommel by Italian common soldiers and NCOs, who, like the German field troops, had the deepest trust and respect for him. Paolo Colacicchi, an officer in the Italian Tenth Army, recalled that Rommel \"became sort of a myth to the Italian soldiers\". Rommel himself held a much more generous view of the Italian soldier than of their leadership, towards whom his disdain, deeply rooted in militarism, was not atypical, although unlike Kesselring he was incapable of concealing it. Unlike many of his superiors and subordinates who held racist views, he was usually \"kindly disposed\" to the Italians in general.", "title": "Style as military commander" }, { "paragraph_id": 90, "text": "James J. Sadkovich cites examples of Rommel abandoning his Italian units, refusing cooperation, rarely acknowledging their achievements and behaving improperly towards his Italian allies in other ways. Giuseppe Mancinelli, who served as liaison between the German and Italian commands, accused Rommel of blaming Italians for his own errors. Sadkovich describes Rommel as arrogantly ethnocentric and disdainful towards Italians.", "title": "Style as military commander" }, { "paragraph_id": 91, "text": "Many authors describe Rommel as having a reputation as a chivalrous, humane and professional officer who earned the respect of both his own troops and his enemies. Gerhard Schreiber quotes Rommel's orders, issued together with Kesselring: \"Sentimentality concerning the Badoglio-following gangs (\"Banden\" in the original, indicating a mob-like crowd) in the uniforms of the former ally is misplaced. Whoever fights against the German soldier has lost any right to be treated well and shall experience toughness reserved for the rabble which betrays friends. Every member of the German troop has to adopt this stance.\" Schreiber writes that this exceptionally harsh and, according to him, \"hate fuelled\" order brutalised the war and was clearly aimed at Italian soldiers, not just partisans. Dennis Showalter writes that \"Rommel was not involved in Italy's partisan war, though the orders he issued prescribing death for Italian soldiers taken in arms and Italian civilians sheltering escaped British prisoners do not suggest he would have behaved significantly different from his Wehrmacht counterparts.\"", "title": "Style as military commander" }, { "paragraph_id": 92, "text": "According to Maurice Remy, orders issued by Hitler during Rommel's stay in a hospital resulted in massacres in the course of Operation Achse, disarming the Italian forces after the armistice with the Allies in 1943. Remy also states that Rommel treated his Italian opponents with his usual fairness, requiring that the prisoners should be accorded the same conditions as German civilians.
Remy opines that an order in which Rommel, in contrast to Hitler's directives, called for no \"sentimental scruples\" against \"Badoglio-dependent bandits in uniforms of the once brothers-in-arms\" should not be taken out of context. Peter Lieb agrees that the order did not radicalise the war and that the disarmament in Rommel's area of responsibility happened without major bloodshed. Italian internees were sent to Germany for forced labour, but Rommel was unaware of this. Klaus Schmider comments that the writings of Lieb and others succeed in vindicating Rommel \"both with regards to his likely complicity in the July plot as well as his repeated refusal to carry out illegal orders.\" Rommel withheld Hitler's Commando Order to execute captured commandos from his Army Group B, with his units reporting that they were treating commandos as regular POWs. It is likely that he had acted similarly in North Africa. Historian Szymon Datner argues that Rommel may have been simply trying to conceal the atrocities of Nazi Germany from the Allies. Remy states that although Rommel had heard rumours about massacres while fighting in Africa, his personality, combined with special circumstances, meant that he was not fully confronted with the reality of atrocities before 1944. When Rommel learned about the atrocities that SS Division Leibstandarte committed in Italy in September 1943, he allegedly forbade his son from joining the Waffen-SS.", "title": "Style as military commander" }, { "paragraph_id": 93, "text": "By the time of the Second World War, French colonial troops were portrayed as a symbol of French depravity in Nazi propaganda; Canadian historian Myron Echenberg writes that Rommel, just like Hitler, viewed black French soldiers with particular disdain. According to author Ward Rutherford, Rommel also held racist views towards British colonial troops from India; Rutherford in his The biography of Field Marshal Erwin Rommel writes: \"Not even his most sycophantic apologists have been able to evade the conclusion, fully demonstrated by his later behavior, that Rommel was a racist who, for example, thought it desperately unfair that the British should employ 'black' – by which he meant Indian – troops against a white adversary.\" Vaughn Raspberry writes that Rommel and other officers considered it an insult to fight against black Africans because they considered black people to be members of \"inferior races\".", "title": "Style as military commander" }, { "paragraph_id": 94, "text": "Bruce Watson comments that whatever racism Rommel might have had in the beginning, it was washed away when he fought in the desert. When he saw that they were fighting well, he gave the members of the 4th Division of the Indian Army high praise. Rommel and the Germans acknowledge the Gurkhas' fighting ability, although their style leaned more towards ferocity. Once he witnessed German soldiers with throats cut by a khukri knife. Originally, he did not want Chandra Bose's Indian formation (composed of the Allied Indian soldiers), captured by his own troops, to work under his command. In Normandy though, when they had already become the Indische Freiwilligen Legion der Waffen SS, he visited them and praised them for their efforts (while they still suffered general disrespect within the Wehrmacht). A review on Rutherford's book by the Pakistan Army Journal says that the statement is one of many that Rutherford uses, which lack support in authority and analysis. 
Rommel saying that using the Indians was unfair should also be put in perspective, considering the disbandment of the battle-hardened 4th Division by the Allies. Rommel praised the colonial troops in the Battle of France: \"The (French) colonial troops fought with extraordinary determination. The anti-tank teams and tank crews performed with courage and caused serious losses\", though that might be an example of generals honouring their opponents so that \"their own victories appear the more impressive\". Reuth comments that Rommel ensured that he and his command would act decently (shown by his treatment of the Free French prisoners, whom Hitler considered partisans, as well as of the Jews and the coloured men), while he was distancing himself from Hitler's racist war in the East and deluding himself into believing that Hitler was good and that only the Party big shots were evil. The black South African soldiers recount that when they were held as POWs after they were captured by Rommel, they initially slept and queued for food away from the whites, until Rommel saw this and told them that brave soldiers should all queue together. Finding this strange coming from a man fighting for Hitler, they adopted this behaviour until they went back to the Union of South Africa, where they were separated again.", "title": "Style as military commander" }, { "paragraph_id": 95, "text": "There are reports that Rommel acknowledged the Maori soldiers' fighting skills, yet at the same time he complained about their methods, which were unfair from the European perspective. When he asked the commander of the New Zealand 6th Infantry Brigade about his division's massacres of the wounded and POWs, the commander attributed these incidents to the Maoris in his unit. Hew Strachan notes that lapses in practising the warriors' code of war were usually attributed to ethnic groups which lived outside Europe, with the implication that those ethnic groups which lived in Europe knew how to behave (although Strachan opines that such attributions were probably true). Nevertheless, according to the website of the 28th Maori Battalion, Rommel always treated them fairly and he also showed understanding with regard to war crimes.", "title": "Style as military commander" }, { "paragraph_id": 96, "text": "Some authors cite, among other cases, Rommel's naive reaction to events in Poland while he was there: he paid a visit to his wife's uncle, the famous Polish priest and patriotic leader Edmund Roszczynialski, who was murdered within days; Rommel never understood this and, at his wife's urging, kept writing letter after letter to Himmler's adjutants asking them to keep track of and take care of their relative. Knopp and Mosier agree that he was naive politically, citing his request for a Jewish Gauleiter in 1943. Despite this, Lieb finds it hard to believe that a man in Rommel's position could have known nothing about atrocities, while accepting that locally he was separated from the places where these atrocities occurred. Der Spiegel comments that Rommel was simply in denial about what happened around him. Alaric Searle points out that it was the early diplomatic successes and bloodless expansion that blinded Rommel to the true nature of his beloved Führer, whom he then naively continued to support. Scheck believes it may be forever unclear whether Rommel recognised the unprecedented depraved character of the regime.", "title": "Style as military commander" }, { "paragraph_id": 97, "text": "Historian Richard J.
Evans has stated that German soldiers in Tunisia raped Jewish women, and the success of Rommel's forces in capturing or securing Allied, Italian and Vichy French territory in North Africa led to many Jews in these areas being killed by other German institutions as part of the Holocaust. Anti-Jewish and Anti-Arab violence erupted in North Africa when Rommel and Ettore Bastico regained territory there in February 1941 and then again in April 1942. While committed by Italian forces, Patrick Bernhard writes \"the Germans were aware of Italian reprisals behind the front lines. Yet, perhaps surprisingly, they seem to have exercised little control over events.", "title": "Style as military commander" }, { "paragraph_id": 98, "text": "The German consul general in Tripoli consulted with Italian state and party officials about possible countermeasures against the natives, but this was the full extent of German involvement. Rommel did not directly intervene, though he advised the Italian authorities to do whatever was necessary to eliminate the danger of riots and espionage; for the German general, the rear areas were to be kept \"quiet\" at all costs. Thus, according to Bernhard, although he had no direct hand in the atrocities, Rommel made himself complicit in war crimes by failing to point out that international laws of war strictly prohibited certain forms of retaliation. By giving carte blanche to the Italians, Rommel implicitly condoned, and perhaps even encouraged, their war crimes\". Gershom reports that the recommendation came from officers \"speaking for Rommel\", and comments, \"Perhaps Rommel did not know or care about the specifics; perhaps his motivation was not hate but dispassionate efficiency. The distinctions would have escaped the men hanging from hooks.\"", "title": "Style as military commander" }, { "paragraph_id": 99, "text": "In his article Im Rücken Rommels. Kriegsverbrechen, koloniale Massengewalt und Judenverfolgung in Nordafrika, Bernhard writes that North African campaign was hardly \"war without hate\" as Rommel described it, and points out rapes of women, ill treatment and executions of captured POWs, as well as racially motivated murders of Arabs, Berbers and Jews, in addition to establishment of concentration camps. Bernhard again cites discussion among the German and Italian authorities about Rommel's position regarding countermeasures against local insurrection (according to them, Rommel wanted to eliminate the danger at all costs) to show that Rommel fundamentally approved of Italian policy in the matter. Bernhard opines that Rommel had informal power over the matter because his military success brought him influence on the Italian authorities.", "title": "Style as military commander" }, { "paragraph_id": 100, "text": "United States Holocaust Memorial Museum describes relationship between Rommel and the proposed Einsatzgruppen Egypt as \"problematic\". The Museum states that this unit was to be tasked with murdering Jewish population of North Africa, Palestine, and it was to be attached directly to Rommel's Afrika Korps. According to the museum Rauff met with Rommel's staff in 1942 as part of preparations for this plan. 
The Museum states that Rommel was certainly aware that planning was taking place, even if his reaction to it is not recorded, and while the main proposed Einsatzgruppen were never put into action, smaller units did murder Jews in North Africa.", "title": "Style as military commander" }, { "paragraph_id": 101, "text": "On the other hand, Christopher Gabel remarks that Richard Evans seems to attempt to prove that Rommel was a war criminal by association but fails to produce evidence that he had actual or constructive knowledge of said crimes. Ben H. Shepherd comments that Rommel showed insight and restraint when dealing with the nomadic Arabs, the only civilians who occasionally intervened in the war and thus risked reprisals as a result. Shepherd cites a request by Rommel to the Italian High Command, in which he complained about excesses against the Arabic population and noted that reprisals without identifying the real culprits were never expedient.", "title": "Style as military commander" }, { "paragraph_id": 102, "text": "The documentary Rommel's War (Rommels Krieg), made by Caron and Müllner with advice from Sönke Neitzel, states that even though it is not clear whether Rommel knew about the crimes (in Africa) or not, \"his military success made possible forced labor, torture and robbery. Rommel's war is always part of Hitler's war of worldviews, whether Rommel wanted it or not.\" More specifically, several German historians have revealed the existence of plans for an SS unit embedded with the Afrika Korps to exterminate the Jews of Egypt and Palestine had Rommel succeeded in his goal of invading the Middle East in 1942.", "title": "Style as military commander" }, { "paragraph_id": 103, "text": "According to Mallmann and Cüppers, a post-war CIA report described Rommel as having met with Walther Rauff, who was responsible for the unit, as having been disgusted after learning about the plan from him, and as having sent him on his way; but they conclude that such a meeting is hardly possible, as Rauff was sent to report to Rommel at Tobruk on 20 July and Rommel was then 500 km away conducting the First Battle of El Alamein. On 29 July, Rauff's unit was sent to Athens, expecting to enter Africa when Rommel crossed the Nile. However, in view of the Axis' deteriorating situation in Africa it returned to Germany in September.", "title": "Style as military commander" }, { "paragraph_id": 104, "text": "Historian Jean-Christoph Caron opines that there is no evidence that Rommel knew of or would have supported Rauff's mission; he also believes Rommel bore no direct responsibility for the SS's looting of gold in Tunisia. Historian Haim Saadon, Director of the Center of Research on North African Jewry in WWII, goes further, stating that there was no extermination plan: Rauff's documents show that his foremost concern was helping the Wehrmacht to win, and he came up with the idea of forced labour camps in the process. By the time these labour camps were in operation, according to Ben H. Shepherd, Rommel was already retreating and there is no proof of his contact with the Einsatzkommando.", "title": "Style as military commander" }, { "paragraph_id": 105, "text": "Haaretz comments that the CIA report is most likely correct regarding both the interaction between Rommel and Rauff and Rommel's objections to the plan: Rauff's assistant Theodor Saevecke, and declassified information from Rauff's file, both report the same story.
Haaretz also remarks that Rommel's influence probably softened the Nazi authorities' attitude to the Jews and to the civilian population generally in North Africa.", "title": "Style as military commander" }, { "paragraph_id": 106, "text": "Rolf-Dieter Müller comments that the war in North Africa, while as bloody as any other war, differed considerably from the war of annihilation in eastern Europe, because it was limited to a narrow coastline and hardly affected the population.", "title": "Style as military commander" }, { "paragraph_id": 107, "text": "Showalter writes that:", "title": "Style as military commander" }, { "paragraph_id": 108, "text": "From the desert campaign's beginning, both sides consciously sought to wage a \"clean\" war—war without hate, as Rommel put it in his reflections. Explanations include the absence of civilians and the relative absence of Nazis; the nature of the environment, which conveyed a \"moral simplicity and transparency\"; and the control of command on both sides by prewar professionals, producing a British tendency to depict war in the imagery of a game, and the corresponding German pattern of seeing it as a test of skill and a proof of virtue. The nature of the fighting as well diminished the last-ditch, close-quarter actions that are primary nurturers of mutual bitterness. A battalion overrun by tanks usually had its resistance broken so completely that nothing was to be gained by a broken-backed final stand.", "title": "Style as military commander" }, { "paragraph_id": 109, "text": "Joachim Käppner writes that while the conflict in North Africa was not as bloody as in Eastern Europe, the Afrika Korps committed some war crimes. Historian Martin Kitchen states that the reputation of the Afrika Korps was preserved by circumstances: the sparsely populated desert areas did not lend themselves to ethnic cleansing; the German forces never reached the large Jewish populations in Egypt and Palestine; and in the urban areas of Tunisia and Tripolitania the Italian government constrained the German efforts to discriminate against or eliminate Jews who were Italian citizens. Despite this, the North African Jews themselves believed that it was Rommel who prevented the \"Final Solution\" from being carried out against them when German might dominated North Africa from Egypt to Morocco. According to Curtis and Remy, 120,000 Jews lived in Algeria, 200,000 in Morocco, about 80,000 in Tunisia and 26,000 in Libya. Remy writes that this number was unchanged following the German invasion of Tunisia in 1942, while Curtis notes that 5,000 of these Jews would be sent to forced labour camps.", "title": "Style as military commander" }, { "paragraph_id": 110, "text": "Hein Klemann writes that the confiscations in the \"foraging zone\" of the Afrika Korps threatened the survival chances of local civilians, just as the plunder enacted by the Wehrmacht did in the Soviet Union.", "title": "Style as military commander" }, { "paragraph_id": 111, "text": "In North Africa Rommel's troops laid down landmines, which in decades to come killed and maimed thousands of civilians. Since record-keeping began in the 1980s, 3,300 people have lost their lives and 7,500 have been maimed. It is disputed whether the landmines in El Alamein, which constitute the most notable portion of landmines left over from World War II, were left by the Afrika Korps or the British Army led by Field Marshal Montgomery.
To this day, Egypt has not joined the Mine Ban Treaty.", "title": "Style as military commander" }, { "paragraph_id": 112, "text": "Rommel sharply protested the regime's anti-Jewish policies and other immoral acts and was an opponent of the Gestapo. He also refused to comply with Hitler's order to execute Jewish POWs. Bryan Mark Rigg writes: \"The only place in the army where one might find a place of refuge was in the Deutsches Afrika-Korps (DAK) under the leadership of the 'Desert Fox,' Field Marshal Erwin Rommel. According to this study's files, his half-Jews were not as affected by the racial laws as most others serving on the European continent.\" He notes, though, that \"Perhaps Rommel failed to enforce the order to discharge half-Jews because he was unaware of it\".", "title": "Style as military commander" }, { "paragraph_id": 113, "text": "Captain Horst van Oppenfeld (a staff officer to Colonel Claus von Stauffenberg and a quarter-Jew) says that Rommel did not concern himself with the racial decrees and that he never experienced any trouble caused by his ancestry during his time in the DAK, even if Rommel never personally intervened on his behalf. Another quarter-Jew, Fritz Bayerlein, became a famous general and Rommel's chief-of-staff, despite also being bisexual, which made his situation even more precarious.", "title": "Style as military commander" }, { "paragraph_id": 114, "text": "Building the Atlantic Wall was officially the responsibility of the Organisation Todt, which was not under Rommel's command, but he enthusiastically joined the task, protesting slave labour and suggesting that they should recruit French civilians and pay them good wages. Despite this, French civilians and Italian prisoners of war held by the Germans were forced by officials under the Vichy government, the Todt Organization and the SS forces to work on building some of the defences Rommel requested, in appalling conditions according to historian Will Fowler. Although they received basic wages, the workers complained that the pay was too low and that there was no heavy equipment.", "title": "Style as military commander" }, { "paragraph_id": 115, "text": "German troops worked almost round-the-clock under very harsh conditions, with Rommel rewarding them with accordions.", "title": "Style as military commander" }, { "paragraph_id": 116, "text": "Rommel was one of the commanders who protested the Oradour-sur-Glane massacre.", "title": "Style as military commander" }, { "paragraph_id": 117, "text": "Rommel was famous in his lifetime, including among his adversaries. His tactical prowess and decency in the treatment of Allied prisoners earned him the respect of opponents including Claude Auchinleck, Archibald Wavell, George S. Patton, and Bernard Montgomery.", "title": "Reputation as a military commander" }, { "paragraph_id": 118, "text": "Rommel's military reputation has been controversial. While nearly all military practitioners acknowledge Rommel's excellent tactical skills and personal bravery, some, such as U.S. major general and military historian David T. Zabecki of the United States Naval Institute, consider Rommel's performance as an operational-level commander to be highly overrated, a view shared by other officers.
General Klaus Naumann, who served as Chief of Staff of the Bundeswehr, agrees with the military historian Charles Messenger that Rommel had challenges at the operational level, and states that Rommel's violation of the unity of command principle, bypassing the chain of command in Africa, was unacceptable and contributed to the eventual operational and strategic failure in North Africa. The German biographer Wolf Heckmann describes Rommel as \"the most overrated commander of an army in world history\".", "title": "Reputation as a military commander" }, { "paragraph_id": 119, "text": "Nevertheless, there is also a notable number of officers who admire his methods, like Norman Schwarzkopf who described Rommel as a genius at battles of movement saying \"Look at Rommel. Look at North Africa, the Arab-Israeli wars, and all the rest of them. A war in the desert is a war of mobility and lethality. It's not a war where straight lines are drawn in the sand and [you] say, 'I will defend here or die.\" Ariel Sharon deemed the German military model used by Rommel to be superior to the British model used by Montgomery. His compatriot Moshe Dayan likewise considered Rommel a model and icon. Wesley Clark states that \"Rommel's military reputation, though, has lived on, and still sets the standard for a style of daring, charismatic leadership to which most officers aspire.\" During the recent desert wars, Rommel's military theories and experiences attracted great interest from policy makers and military instructors. Chinese military leader Sun Li-jen had the laudatory nickname \"Rommel of the East\". Certain modern military historians, such as Larry T. Addington, Niall Barr, Douglas Porch and Robert Citino, are sceptical of Rommel as an operational, let alone strategic level commander. They point to Rommel's lack of appreciation for Germany's strategic situation, his misunderstanding of the relative importance of his theatre to the German High Command, his poor grasp of logistical realities, and, according to the historian Ian Beckett, his \"penchant for glory hunting\". Citino credits Rommel's limitations as an operational level commander as \"materially contributing\" to the eventual demise of the Axis forces in North Africa, while Addington focuses on the struggle over strategy, whereby Rommel's initial brilliant success resulted in \"catastrophic effects\" for Germany in North Africa. Porch highlights Rommel's \"offensive mentality\", symptomatic of the Wehrmacht commanders as a whole in the belief that the tactical and operational victories would lead to strategic success. Compounding the problem was the Wehrmacht's institutional tendency to discount logistics, industrial output and their opponents' capacity to learn from past mistakes.", "title": "Reputation as a military commander" }, { "paragraph_id": 120, "text": "The historian Geoffrey P. Megargee points out Rommel's playing the German and Italian command structures against each other to his advantage. Rommel used the confused structure—the High command of the armed forces, the OKH (Supreme High Command of the Army) and the Comando Supremo (Italian Supreme Command)—to disregard orders that he disagreed with or to appeal to whatever authority he felt would be most sympathetic to his requests.", "title": "Reputation as a military commander" }, { "paragraph_id": 121, "text": "Some historians take issue with Rommel's absence from Normandy on the day of the Allied invasion, 6 June 1944. 
He had left France on 5 June and was at home on the 6th celebrating his wife's birthday. (According to Rommel, he planned to proceed to see Hitler the next day to discuss the situation in Normandy). Zabecki calls his decision to leave the theatre in view of an imminent invasion \"an incredible lapse of command responsibility\". Lieb remarks that Rommel displayed real mental agility, but that the lack of an energetic commander, together with other problems, meant that the battle was largely not conducted according to his concept (which ran contrary to German doctrine), although the result was still better than Geyr's plan. Lieb also opines that while his harshest critics (who mostly came from the General Staff) often said that Rommel was overrated or not suitable for higher commands, envy was a big factor here.", "title": "Reputation as a military commander" }, { "paragraph_id": 122, "text": "T.L. McMahon argues that while Rommel no doubt possessed operational vision, he did not have the strategic resources to effect his operational choices, while his forces provided the tactical ability to accomplish his goals; that the German staff and system of staff command were designed for commanders who led from the front; and that in some cases he might have chosen the same options as Montgomery (a reputedly strategy-oriented commander) had he been put in the same conditions. According to Steven Zaloga, tactical flexibility was a great advantage of the German system, but in the final years of the war, Hitler and his cronies like Himmler and Goering had usurped more and more authority at the strategic level, leaving professionals like Rommel under increasing constraints on their actions. Martin Blumenson considers Rommel a general with a compelling view of strategy and logistics, which was demonstrated through his many arguments with his superiors over such matters, although Blumenson also thinks that what distinguished Rommel was his boldness and his intuitive feel for the battlefield (upon which Schwarzkopf also comments: \"Rommel had a feel for the battlefield like no other man.\").", "title": "Reputation as a military commander" }, { "paragraph_id": 123, "text": "Joseph Forbes comments that: \"The complex, conflict-filled interaction between Rommel and his superiors over logistics, objectives and priorities should not be used to detract from Rommel's reputation as a remarkable military leader\", because Rommel was not given powers over logistics, and because if only generals who attain strategic-policy goals are great generals, such highly regarded commanders as Robert E. Lee, Hannibal and Charles XII would have to be excluded from that list. General Siegfried F. Storbeck, Deputy Inspector General of the Bundeswehr (1987–1991), remarks that Rommel's leadership style and offensive thinking, although carrying inherent risks such as losing the overview of the situation and creating overlaps of authority, have proved effective, and have been analysed and incorporated into the training of officers by \"us, our Western allies, the Warsaw Pact, and even the Israel Defense Forces\". Maurice Remy defends his strategic decision regarding Malta as, although risky, the only logical choice.", "title": "Reputation as a military commander" }, { "paragraph_id": 124, "text": "Rommel was among the few Axis commanders (the others being Isoroku Yamamoto and Reinhard Heydrich) who were targeted for assassination by Allied planners.
Two attempts were made, the first being Operation Flipper in North Africa in 1941, and the second being Operation Gaff in Normandy in 1944.", "title": "Reputation as a military commander" }, { "paragraph_id": 125, "text": "Research by Norman Ohler claims that Rommel's behaviours were heavily influenced by Pervitin which he reportedly took in heavy doses, to such an extent that Ohler refers to him as \"the Crystal Fox\" (\"Kristallfuchs\") – playing off the nickname \"Desert Fox\" famously given to him by the British.", "title": "Reputation as a military commander" }, { "paragraph_id": 126, "text": "In France, Rommel ordered the execution of one French officer who refused three times to cooperate when being taken prisoner; there are disputes as to whether this execution was justified. Caddick-Adams comments that this would make Rommel a war criminal condemned by his own hand, and that other authors overlook this episode. Butler notes that the officer refused to surrender three times and thus died in a courageous but foolhardy way. French historian Petitfrère remarks that Rommel was in a hurry and had no time for useless palavers, although this act was still debatable. Telp remarks that, \"he treated prisoners of war with consideration. On one occasion, he was forced to order the shooting of a French lieutenant-colonel for refusing to obey his captors.\" Scheck says, \"Although there is no evidence incriminating Rommel himself, his unit did fight in areas where German massacres of black French prisoners of war were extremely common in June 1940.\"", "title": "Debate about atrocities" }, { "paragraph_id": 127, "text": "There are reports that during the fighting in France, Rommel's 7th Panzer Division committed atrocities against surrendering French troops and captured prisoners of war. The atrocities, according to Martin S. Alexander, included the murder of 50 surrendering officers and men at Quesnoy and the nearby Airaines. According to Richardot, on 7 June, the commanding French officer Charles N'Tchoréré and his company surrendered to the 7th Panzer Division. He was then executed by the 25th Infantry Regiment (the 7th Panzer Division did not have a 25th Infantry Regiment). Journalist Alain Aka states simply that he was executed by one of Rommel's soldiers and his body was driven over by tank. Erwan Bergot reports that he was killed by the SS. Historian John Morrow states he was shot in the neck by a Panzer officer, without mentioning the unit of the perpetrators of this crime. The website of the National Federation of Volunteer Servicemen (F.N.C.V., France) states that N'Tchoréré was pushed against the wall and, despite protests from his comrades and newly liberated German prisoners, was shot by the SS.", "title": "Debate about atrocities" }, { "paragraph_id": 128, "text": "Elements of the division are considered by Scheck to have been \"likely\" responsible for the murder of POWs in Hangest-sur-Somme, while Scheck reports that they were too far away to have been involved in the massacres at Airaines and nearby villages. Scheck says that the German units fighting there came from the 46th and 2nd Infantry Division, and possibly from the 6th and 27th Infantry Division as well. Scheck also writes that there were no SS units in the area. Morrow, citing Scheck, says that the 7th Panzer Division carried out \"cleansing operations\". French historian Dominique Lormier counts the number of victims of the 7th Panzer Division in Airaines at 109, mostly French-African soldiers from Senegal. 
Showalter writes:", "title": "Debate about atrocities" }, { "paragraph_id": 129, "text": "In fact, the garrison of Le Quesnoy, most of them Senegalese, took heavy toll of the German infantry in house-to-house fighting. Unlike other occasions in 1940, when Germans and Africans met, there was no deliberate massacre of survivors. Nevertheless, the riflemen took few prisoners, and the delay imposed by the tirailleurs forced the Panzers to advance unsupported until Rommel was ordered to halt for fear of coming under attack by Stukas.", "title": "Debate about atrocities" }, { "paragraph_id": 130, "text": "Claus Telp comments that Airaines was not in the sector of the 7th, but at Hangest and Martainville, elements of the 7th might have shot some prisoners and used British Colonel Broomhall as a human shield (although Telp is of the opinion that it was unlikely that Rommel approved of, or even knew about, these two incidents). Historian David Stone notes that acts of shooting surrendered prisoners were carried out by Rommel's 7th Panzer Division and observes contradictory statements in Rommel's account of the events; Rommel initially wrote that \"any enemy troops were wiped out or forced to withdraw\" but also added that \"many prisoners taken were hopelessly drunk.\" Stone attributes the massacres of soldiers from the 53ème Regiment d'Infanterie Coloniale (N'Tchoréré's unit) on 7 June to the 5th Infantry Division. Historian Daniel Butler agrees that it was possible that the massacre at Le Quesnoy happened given the existence of Nazis, such as Hanke, in Rommel's division, while stating that in comparison with other German units, few sources regarding such actions of the men of the 7th Panzer exist. Butler believes that \"it's almost impossible to imagine\" Rommel authorising or countenancing such actions. He also writes that", "title": "Debate about atrocities" }, { "paragraph_id": 131, "text": "Some accusers have twisted a remark in Rommel's own account of the action in the village of Le Quesnoy as proof that he at least tacitly condoned the executions—'any enemy troops were either wiped out or forced to withdraw'—but the words themselves as well as the context of the passage hardly support the contention.", "title": "Debate about atrocities" }, { "paragraph_id": 132, "text": "Giordana Terracina writes that: \"On April 3, the Italians recaptured Benghazi and a few months later the Afrika Korps led by Rommel was sent to Libya and began the deportation of the Jews of Cyrenaica in the concentration camp of Giado and other smaller towns in Tripolitania. This measure was accompanied by shooting, also in Benghazi, of some Jews guilty of having welcomed the British troops, on their arrival, treating them as liberators.\" Gershom states that Italian authorities were responsible for bringing Jews into their concentration camps, which were \"not built to exterminate its inmates\", yet as the water and food supply was meager, were not built to keep humans alive either. Also according to Gershom, the German consul in Tripoli knew about the process and trucks used to transport supply to Rommel were sometimes used to transport Jews, despite all problems the German forces were having. The Jerusalem Post's review of Gershom Gorenberg's War of shadows writes that: \"The Italians were far more brutal with civilians, including Libyan Jews, than Rommel’s Afrika Korps, which by all accounts abided by the laws of war. 
But nobody worried that the Italians who sent Jews to concentration camps in Libya, would invade British-held Egypt, let alone Mandatory Palestine.\"", "title": "Debate about atrocities" }, { "paragraph_id": 133, "text": "According to German historian Wolfgang Proske, Rommel forbade his soldiers from buying anything from the Jewish population of Tripoli, used Jewish slave labour and commanded Jews to clear out minefields by walking on them ahead of his forces. According to Proske, some of the Libyan Jews were eventually sent to concentration camps. Historians Christian Schweizer and Peter Lieb note that: \"Over the last few years, even though the social science teacher Wolfgang Proske has sought to participate in the discussion [on Rommel] with very strong opinions, his biased submissions are not scientifically received.\" The Heidenheimer Zeitung notes that Proske published his main work, Täter, Helfer, Trittbrettfahrer – NS-Belastete von der Ostalb, himself after failing to find another publisher for it.", "title": "Debate about atrocities" }, { "paragraph_id": 134, "text": "According to historian Michael Wolffsohn, during the Africa campaign, preparations for committing genocide against the North African Jews were in full swing and a thousand of them were transported to East European concentration camps. At the same time, he recommends that the Bundeswehr keep the names and traditions associated with Rommel (although Wolffsohn opines that the focus should be put on the politically thoughtful soldier he became at the end of his life, rather than the swashbuckler and the humane rogue).", "title": "Debate about atrocities" }, { "paragraph_id": 135, "text": "Robert Satloff writes in his book Among the Righteous: Lost Stories from the Holocaust's Long Reach into Arab Lands that as the German and Italian forces retreated across Libya towards Tunisia, the Jewish population became the victim upon which they released their anger and frustration. According to Satloff, Afrika Korps soldiers plundered Jewish property all along the Libyan coast. This violence and persecution only came to an end with the arrival of General Montgomery in Tripoli on 23 January 1943. According to Maurice Remy, although there were antisemitic individuals in the Afrika Korps, actual cases of abuse are not known, even against the Jewish soldiers of the Eighth Army. Remy quotes Isaac Levy, the Senior Jewish Chaplain of the Eighth Army, as saying that he had never seen \"any sign or hint that the soldiers [of the Afrika Korps] are antisemitic\". The Telegraph comments: \"Accounts suggest that it was not Field Marshal Erwin Rommel but the ruthless SS colonel Walter Rauff who stripped Tunisian Jews of their wealth.\"", "title": "Debate about atrocities" }, { "paragraph_id": 136, "text": "Commenting on Rommel's conquest of Tunisia, Marvin Perry writes that: \"The bridgehead Rommel established in Tunisia enabled the SS to herd Jews into slave labor camps.\"", "title": "Debate about atrocities" }, { "paragraph_id": 137, "text": "Der Spiegel writes that: \"The SS had established a network of labor camps in Tunisia. More than 2,500 Tunisian Jews died in six months of German rule, and the regular army was also involved in executions.\" Caron writes in Der Spiegel that the camps were organised in early December 1942 by Nehring, the commander in Tunisia, and Rauff, while Rommel was retreating. As commander of the German Afrika Korps, Nehring would continue to use Tunisian forced labour.
According to Caddick-Adams, no Waffen-SS served under Rommel in Africa at any time and most of the activities of Rauff's detachment happened after Rommel's departure. Shepherd notes that during this time Rommel was retreating and there is no evidence that he had contact with the Einsatzkommando. Addressing the call of some authors to contextualise Rommel's actions in Italy and North Africa, Wolfgang Mährle notes that while it is undeniable that Rommel played the role of a Generalfeldmarschall in a criminal war, this only illustrates in a limited way his personal attitude and the actions that resulted from it.", "title": "Debate about atrocities" }, { "paragraph_id": 138, "text": "According to several historians, allegations and stories that associate Rommel and the Afrika Korps with the harassing and plundering of Jewish gold and property in Tunisia are usually known under the name \"Rommel's treasure\" or \"Rommel's gold\". Michael FitzGerald comments that the treasure would more accurately be named Rauff's gold, as Rommel had nothing to do with its acquisition or removal. Jean-Christoph Caron comments that the treasure legend has a real core and that Jewish property was looted by the SS in Tunisia and later might have been hidden or sunk off the coast of Corsica, where Rauff was stationed in 1943. The person who gave birth to the full-blown legend was the SS soldier Walter Kirner, who presented a false map to the French authorities. Caron and Jörg Müllner, his co-author of the ZDF documentary Rommel's Treasure (Rommels Schatz), tell Die Welt that \"Rommel had nothing to do with the treasure, but his name is associated with everything that happened in the war in Africa.\"", "title": "Debate about atrocities" }, { "paragraph_id": 139, "text": "Rick Atkinson criticises Rommel for gaining a looted stamp collection (a bribe from Sepp Dietrich) and a villa taken from Jews. Lucas, Matthews and Remy, though, describe the contemptuous and angry reaction of Rommel towards Dietrich's act and the lootings and other brutal behaviours of the SS that he had discovered in Italy. Claudia Hecht also explains that although the Stuttgart and Ulm authorities did arrange for the Rommel family to use, for a brief period after their own house had been destroyed by Allied bombing, a villa whose Jewish owners had been forced out two years earlier, ownership of it was never transferred to them. Butler notes that Rommel was one of the few who refused the large estates and gifts of cash that Hitler gave to his generals.", "title": "Debate about atrocities" }, { "paragraph_id": 140, "text": "At the beginning, although Hitler and Goebbels took particular notice of Rommel, the Nazi elites had no intention of creating one major war symbol (partly out of fear that he would overshadow Hitler), and generated huge propaganda campaigns not only for Rommel but also for Gerd von Rundstedt, Walther von Brauchitsch, Eduard Dietl and Sepp Dietrich (the latter two were party members and also strongly supported by Hitler), among others. Nevertheless, a multitude of factors—including Rommel's unusual charisma, his talents both in military matters and public relations, the efforts of Goebbels's propaganda machine, and the Allies' participation in mythologising his life (whether for political benefits, sympathy for someone who evoked a romantic archetype, or genuine admiration for his actions)—gradually contributed to Rommel's fame.
Spiegel wrote, \"Even back then his fame outshone that of all other commanders.\"", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 141, "text": "Rommel's victories in France were featured in the German press and in the February 1941 film Sieg im Westen (Victory in the West), in which Rommel personally helped direct a segment re-enacting the crossing of the Somme River.According to Scheck, although there is no evidence of Rommel committing crimes, during the shooting of the movie, African prisoners of war, were forced to take part in its making, and forced to carry out humiliating acts. Stills from the re-enactment are found in \"Rommel Collection\"; it was filmed by Hans Ertl, assigned to this task by Dr. Kurt Hesse, a personal friend of Rommel, who worked for Wehrmacht Propaganda Section V Rommel's victories in 1941 were played up by the Nazi propaganda, even though his successes in North Africa were achieved in arguably one of Germany's least strategically important theatres of World War II. In November 1941, Reich Minister of Propaganda Joseph Goebbels wrote about \"the urgent need\" to have Rommel \"elevated to a kind of popular hero.\" Rommel, with his innate abilities as a military commander and love of the spotlight, was a perfect fit for the role Goebbels designed for him.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 142, "text": "In North Africa, Rommel received help in cultivating his image from Alfred Ingemar Berndt, a senior official at the Reich Propaganda Ministry who had volunteered for military service. Seconded by Goebbels, Berndt was assigned to Rommel's staff and became one of his closest aides. Berndt often acted as liaison between Rommel, the Propaganda Ministry, and the Führer Headquarters. He directed Rommel's photo shoots and filed radio dispatches describing the battles.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 143, "text": "In the spring of 1941, Rommel's name began to appear in the British media. In the autumn of 1941 and early winter of 1941/1942, he was mentioned in the British press almost daily. Toward the end of the year, the Reich propaganda machine also used Rommel's successes in Africa as a diversion from the Wehrmacht's challenging situation in the Soviet Union with the stall of Operation Barbarossa. The American press soon began to take notice of Rommel as well, following the country's entry into the war on 11 December 1941, writing that \"The British (...) admire him because he beat them and were surprised to have beaten in turn such a capable general.\" General Auchinleck distributed a directive to his commanders seeking to dispel the notion that Rommel was a \"superman\". Rommel, no matter how hard the situation was, made a deliberate effort at always spending some time with soldiers and patients, his own and POWs alike, which contributed greatly to his reputation of not only being a great commander but also \"a decent chap\" among the troops.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 144, "text": "The attention of the Western and especially the British press thrilled Goebbels, who wrote in his diary in early 1942: \"Rommel continues to be the recognized darling of even the enemies' news agencies.\" The Field Marshal was pleased by the media attention, although he knew the downsides of having a reputation. 
Hitler took note of the British propaganda as well, commenting in the summer of 1942 that Britain's leaders must have hoped \"to be able to explain their defeat to their own nation more easily by focusing on Rommel\".", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 145, "text": "The Field Marshal was the German commander most frequently covered in the German media, and the only one to be given a press conference, which took place in October 1942. The press conference was moderated by Goebbels and was attended by both domestic and foreign media. Rommel declared: \"Today we (...) have the gates of Egypt in hand, and with the intent to act!\" Keeping the focus on Rommel distracted the German public from Wehrmacht losses elsewhere as the tide of the war began to turn. He became a symbol that was used to reinforce the German public's faith in an ultimate Axis victory.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 146, "text": "In the wake of the successful British offensive in November 1942 and other military reverses, the Propaganda Ministry directed the media to emphasise Rommel's invincibility. The charade was maintained until the spring of 1943, even as the German situation in Africa became increasingly precarious. To ensure that the inevitable defeat in Africa would not be associated with Rommel's name, Goebbels had the Army High Command announce in May 1943 that Rommel was on a two-month leave for health reasons. Instead, the campaign was presented by Berndt, who resumed his role in the Propaganda Ministry, as a ruse to tie down the British Empire while Germany was turning Europe into an impenetrable fortress with Rommel at the helm of this success. After the radio programme ran in May 1943, Rommel sent Berndt a case of cigars as a sign of his gratitude.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 147, "text": "Although Rommel then entered a period without a significant command, he remained a household name in Germany, synonymous with the aura of invincibility. Hitler then made Rommel part of his defensive strategy for Fortress Europe (Festung Europa) by sending him to the West to inspect fortifications along the Atlantic Wall. Goebbels supported the decision, noting in his diary that Rommel was \"undoubtedly the suitable man\" for the task. The propaganda minister expected the move to reassure the German public and at the same time to have a negative impact on the Allied forces' morale.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 148, "text": "In France, a Wehrmacht propaganda company frequently accompanied Rommel on his inspection trips to document his work for both domestic and foreign audiences. In May 1944 the German newsreels reported on Rommel's speech at a Wehrmacht conference, where he stated his conviction that \"every single German soldier will make his contribution against the Anglo-American spirit that it deserves for its criminal and bestial air war campaign against our homeland.\" The speech led to an upswing in morale and sustained confidence in Rommel.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 149, "text": "When Rommel was seriously wounded on 17 July 1944, the Propaganda Ministry undertook efforts to conceal the injury so as not to undermine domestic morale. Despite those, the news leaked to the British press. To counteract the rumours of a serious injury and even death, Rommel was required to appear at 1 August press conference. 
On 3 August, the German press published an official report that Rommel had been injured in a car accident. Rommel noted in his diary his dismay at this twisting of the truth, belatedly realising how much the Reich propaganda was using him for its own ends.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 150, "text": "Rommel was interested in propaganda beyond the promotion of his own image. In 1944, after visiting Rommel in France and reading his proposals on counteracting Allied propaganda, Alfred-Ingemar Berndt remarked: \"He is also interested in this propaganda business and wants to develop it by all means. He has even thought and brought out practical suggestions for each program and subject.\"", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 151, "text": "Rommel saw the propaganda and education values in his and his nation's deeds (He also did value justice itself; according to Admiral Ruge's diary, Rommel told Ruge: \"Justice is the indispensable foundation of a nation. Unfortunately, the higher-ups are not clean. The slaughterings are grave sins.\") The key to the successful creating of an image, according to Rommel, was leading by example:", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 152, "text": "The men tend to feel no kind of contact with a commander who, they know, is sitting somewhere in headquarters. What they want is what might be termed a physical contact with him. In moments of panic, fatigue, or disorganization, or when something out of the ordinary has to be demanded from them, the personal example of the commander works wonders, especially if he has had the wit to create some sort of legend around himself.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 153, "text": "He urged Axis authorities to treat the Arab with the utmost respect to prevent uprisings behind the front. He protested the use of propaganda at the cost of explicit military benefits though, criticising Hitler's headquarters for being unable to tell the German people and the world that El Alamein had been lost and preventing the evacuation of the German forces in Northern Africa in the process. Ruge suggests that his chief treated his own fame as a kind of weapon.", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 154, "text": "In 1943, he surprised Hitler by proposing that a Jew should be made into a Gauleiter to prove to the world that Germany was innocent of accusations that Rommel had heard from the enemy's propaganda regarding the mistreatment of Jews. Hitler replied, \"Dear Rommel, you understand nothing about my thinking at all.\"", "title": "In Nazi and Allied propaganda" }, { "paragraph_id": 155, "text": "Rommel was not a member of the Nazi Party. Rommel and Hitler had a close and genuine, if complicated, personal relationship. Rommel, as other Wehrmacht officers, welcomed the Nazi rise to power. Numerous historians state that Rommel was one of Hitler's favourite generals and that his close relationship with the dictator benefited both his inter-war and war-time career. Robert Citino describes Rommel as \"not apolitical\" and writes that he owed his career to Hitler, to whom Rommel's attitude was \"worshipful\", with Messenger agreeing that Rommel owed his tank command, his hero status and other promotions to Hitler's interference and support.", "title": "Relationship with Nazism" }, { "paragraph_id": 156, "text": "Kesselring described Rommel's own power over Hitler as \"hypnotic\". 
In 1944, Rommel himself told Ruge and his wife that Hitler had a kind of irresistible magnetic aura (\"Magnetismus\") and was always seemingly in an intoxicated condition. Maurice Remy identifies that the point at which their relationship became a personal one was 1939, when Rommel proudly announced to his friend Kurt Hesse that he had \"sort of forced Hitler to go with me (to the Hradschin Castle in Prague, in an open top car, without another bodyguard), under my personal protection ... He had entrusted himself to me and would never forget me for my excellent advice.\"", "title": "Relationship with Nazism" }, { "paragraph_id": 157, "text": "The close relationship between Rommel and Hitler continued following the Western campaign; after Rommel sent to him a specially prepared diary on the 7th Division, he received a letter of thanks from the dictator. (According to Speer, he would normally send extremely unclear reports which annoyed Hitler greatly.) According to Maurice Remy, the relationship, which Remy calls \"a dream marriage\", showed the first crack only in 1942, and later gradually turned into, in the words of German writer Ernst Jünger (in contact with Rommel in Normandy), \"Haßliebe\" (a love-hate relationship). Ruge's diary and Rommel's letters to his wife show his mood fluctuating wildly regarding Hitler: while he showed disgust towards the atrocities and disappointment towards the situation, he was overjoyed to welcome a visit from Hitler, only to return to depression the next day when faced with reality.", "title": "Relationship with Nazism" }, { "paragraph_id": 158, "text": "Hitler displayed the same emotions. Amid growing doubts and differences, he would remain eager for Rommel's calls (they had almost daily, hour-long, highly animated conversations, with the preferred topic being technical innovations): he once almost grabbed the telephone out of Linge's hand. But, according to Linge, seeing Rommel's disobedience Hitler also realised his mistake in building up Rommel, whom not only the Afrika Korps but also the German people in general now considered the German God. Hitler tried to fix the dysfunctional relationship many times without results, with Rommel calling his attempts \"Sunlamp Treatment\", although later he said that \"Once I have loved the Führer, and I still do.\" Remy and Der Spiegel remark that the statement was very much genuine, while Watson notes that Rommel believed he deserved to die for his treasonable plan.", "title": "Relationship with Nazism" }, { "paragraph_id": 159, "text": "Rommel was an ambitious man who took advantage of his proximity to Hitler and willingly accepted the propaganda campaigns designed for him by Goebbels. On one hand, he wanted personal promotion and the realisation of his ideals. On the other hand, being elevated by the traditional system that gave preferential treatment to aristocratic officers would be betrayal of his aspiration \"to remain a man of the troops\". In 1918, Rommel refused an invitation to a prestigious officer training course, and with it, the chance to be promoted to general. Additionally, he had no inclination towards the political route, preferring to remain a soldier (\"Nur-Soldat\"). He was thus attracted by the Common Man theme which promised to level German society, the glorification of the national community, and the idea of a soldier of common background who served the Fatherland with talent and got rewarded by another common man who embodied the will of the German people. 
While he had much indignation towards Germany's contemporary class problem, this self-association with the Common Man went along well with his desire to emulate the knights of the past, who also led from the front. Rommel seemed to enjoy the idea of peace, as shown by his words to his wife in August 1939: \"You can trust me, we have taken part in one World War, but as long as our generation live, there will not be a second\", as well as his letter sent to her the night before the Invasion of Poland, in which he expressed (in Maurice Remy's phrase) \"boundless optimism\": \"I still believe the atmosphere will not become more bellicose.\" Butler remarks that Rommel was a centrist in his politics, leaning a little to the left in his attitude.", "title": "Relationship with Nazism" }, { "paragraph_id": 160, "text": "Messenger argues that Rommel's attitude towards Hitler changed only after the Allied invasion of Normandy, when Rommel came to realise that the war could not be won, while Maurice Remy suggests that Rommel never truly broke away from the relationship with Hitler but praises him for \"always [having] the courage to oppose him whenever his conscience required so\". The historian Peter Lieb states that it was not clear whether the threat of defeat was the only reason Rommel wanted to switch sides. The relationship seemed to go significantly downhill after a conversation in July 1943, in which Hitler told Rommel that if they did not win the war, the Germans could rot. Rommel even began to think that it was lucky that his Afrika Korps was now safe as POWs and could escape Hitler's Wagnerian ending. Die Welt comments that Hitler chose Rommel as his favourite because he was apolitical, and that the combination of his military expertise and circumstances allowed Rommel to remain clean.", "title": "Relationship with Nazism" }, { "paragraph_id": 161, "text": "Rommel's political inclinations were a controversial matter even among the contemporary Nazi elites. Rommel himself, while showing support for some facets of the Nazi ideology and enjoying the propaganda machine that the Nazis had built around him, was enraged by the Nazi media's effort to portray him as an early Party member and son of a mason, forcing them to correct this misinformation. The Nazi elites were not comfortable with the idea of a national icon who did not wholeheartedly support the regime. Hitler and Goebbels, his main supporters, tended to defend him. When Rommel was being considered for appointment as Commander-in-Chief of the Army in the summer of 1942, Goebbels wrote in his diary that Rommel \"is ideologically sound, is not just sympathetic to the National Socialists. He is a National Socialist; he is a troop leader with a gift for improvisation, personally courageous and extraordinarily inventive. These are the kinds of soldiers we need.\" Despite this, they gradually saw that his grasp of political realities and his views could be very different from theirs. Hitler knew, though, that Rommel's optimistic and combative character was indispensable for his war efforts. When Rommel lost faith in the final victory and Hitler's leadership, Hitler and Goebbels tried to find an alternative in Manstein to remedy the fighting will and \"political direction\" of other generals but did not succeed.", "title": "Relationship with Nazism" }, { "paragraph_id": 162, "text": "Meanwhile, officials who did not like Rommel, such as Bormann and Schirach, whispered to each other that he was not a Nazi at all. 
Rommel's relationship to the Nazi elites, other than Hitler and Goebbels, was mostly hostile, although even powerful people like Bormann and Himmler had to tread carefully around Rommel. Himmler, who played a decisive role in Rommel's death, tried to blame Keitel and Jodl for the deed. And in fact the deed was initiated by them. They deeply resented Rommel's meteoric rise and had long feared that he would become the Commander-in-Chief. (Hitler also played innocent by trying to erect a monument for the national hero, on 7 March 1945) Franz Halder, after concocting several schemes to rein in Rommel through people like Paulus and Gause to no avail (even willing to undermine German operations and strategy in the process for the sole purpose of embarrassing him), concluded that Rommel was a madman with whom no one dared to cross swords because of \"his brutal methods and his backing from the highest levels\". (Rommel imposed a high number of courts martial, but according to Westphal, he never signed the final order. Owen Connelly comments that he could afford easy discipline because of his charisma). Rommel for his part was highly critical of Himmler, Halder, the High Command and particularly Goering who Rommel at one point called his \"bitterest enemy\". Hitler realised that Rommel attracted the elites' negative emotions to himself, in the same way he generated optimism in the common people. Depending on the case, Hitler manipulated or exacerbated the situation in order to benefit himself, although he originally had no intent of pushing Rommel to the point of destruction. (Even when informed of Rommel's involvement in the plot, hurt and vengeful, Hitler at first wanted to retire Rommel, and eventually offered him a last-minute chance to explain himself and refute the claims, which Rommel apparently did not take advantage of.) Ultimately Rommel's enemies worked together to bring him down.", "title": "Relationship with Nazism" }, { "paragraph_id": 163, "text": "Maurice Remy concludes that, unwillingly and probably without ever realising it, Rommel was part of a murderous regime, although he never actually grasped the core of Nazism. Peter Lieb sees Rommel as a person who could not be put into a single drawer, although problematic by modern moral standards, and suggests people should personally decide for themselves whether Rommel should remain a role model or not. He was a Nazi general in some aspects, considering his support for the leader cult (Führerkult) and the Volksgemeinschaft, but he was not an antisemite, nor a war criminal, nor a radical ideological fighter. Historian Cornelia Hecht remarks \"It is really hard to know who the man behind the myth was,\" noting that in numerous letters he wrote to his wife during their almost 30-year marriage, he commented little on political issues as well as his personal life as a husband and a father.", "title": "Relationship with Nazism" }, { "paragraph_id": 164, "text": "According to some revisionist authors, an assessment of Rommel's role in history has been hampered by views of Rommel that were formed, at least in part, for political reasons, creating what these historians have called the \"Rommel myth\". The interpretation considered by some historians to be a myth is the depiction of the Field Marshal as an apolitical, brilliant commander and a victim of Nazi Germany who participated in the 20 July plot against Adolf Hitler. 
There are a notable number of authors who refer to \"Rommel Myth\" or \"Rommel Legend\" in a neutral or positive manner though. The seeds of the myth can be found first in Rommel's drive for success as a young officer in World War I and then in his popular 1937 book Infantry Attacks, which was written in a style that diverged from the German military literature of the time and became a best-seller.", "title": "Rommel myth" }, { "paragraph_id": 165, "text": "The myth then took shape during the opening years of World War II, as a component of Nazi propaganda to praise the Wehrmacht and instill optimism in the German public, with Rommel's willing participation. When Rommel came to North Africa, it was picked up and disseminated in the West by the British press as the Allies sought to explain their continued inability to defeat the Axis forces in North Africa. The British military and political figures contributed to the heroic image of the man as Rommel resumed offensive operations in January 1942 against the British forces weakened by redeployments to the Far East. During parliamentary debate following the fall of Tobruk, Churchill described Rommel as an \"extraordinary bold and clever opponent\" and a \"great field commander\".", "title": "Rommel myth" }, { "paragraph_id": 166, "text": "According to Der Spiegel following the war's end, West Germany yearned for father figures who were needed to replace the former ones who had been unmasked as criminals. Rommel was chosen because he embodied the decent soldier, cunning yet fair-minded, and if guilty by association, not so guilty that he became unreliable, and additionally, former comrades reported that he was close to the Resistance. While everyone else was disgraced, his star became brighter than ever, and he made the historically unprecedented leap over the threshold between eras: from Hitler's favourite general to the young republic's hero. Cornelia Hecht notes that despite the change of times, Rommel has become the symbol of different regimes and concepts, which is paradoxical, whoever the man he really was.", "title": "Rommel myth" }, { "paragraph_id": 167, "text": "At the same time, the Western Allies, and particularly the British, depicted Rommel as the \"good German\". His reputation for conducting a clean war was used in the interest of the West German rearmament and reconciliation between the former enemies—Britain and the United States on one side and the new Federal Republic of Germany on the other. When Rommel's alleged involvement in the plot to kill Hitler became known after the war, his stature was enhanced in the eyes of his former adversaries. Rommel was often cited in Western sources as a patriotic German willing to stand up to Hitler. Churchill wrote about him in 1950: \"[Rommel] (...) deserves our respect because, although a loyal German soldier, he came to hate Hitler and all his works and took part in the conspiracy of 1944 to rescue Germany by displacing the maniac and tyrant.\"", "title": "Rommel myth" }, { "paragraph_id": 168, "text": "While at Cadet School in 1911, Rommel met and became engaged to 17-year-old Lucia (Lucie) Maria Mollin (1894–1971). While stationed in Weingarten in 1913, Rommel developed a relationship with Walburga Stemmer, which produced a daughter, Gertrud, born 8 December 1913. Because of elitism in the officer corps, Stemmer's working-class background made her unsuitable as an officer's wife, and Rommel felt honour-bound to uphold his previous commitment to Mollin. 
With Mollin's cooperation, he accepted financial responsibility for the child. Rommel and Mollin were married in November 1916 in Danzig. Rommel's marriage was a happy one, and he wrote his wife at least one letter every day while he was in the field.", "title": "Family life" }, { "paragraph_id": 169, "text": "After the end of the First World War, the couple settled initially in Stuttgart, and Stemmer and her child lived with them. Gertrud was referred to as Rommel's niece, a fiction that went unquestioned because of the enormous number of women widowed during the war. Walburga died suddenly in October 1928, and Gertrud remained a member of the household until Rommel's death in 1944. The incident with Walburga seemed to affect Rommel for the rest of his life: he would always keep women distant. A son, Manfred Rommel, was born on 24 December 1928; he later served as Mayor of Stuttgart from 1974 to 1996.", "title": "Family life" }, { "paragraph_id": 170, "text": "The German Army's largest base, the Field Marshal Rommel Barracks, Augustdorf, is named in his honour; at the dedication in 1961 his widow Lucie and son Manfred Rommel were guests of honour. The Rommel Barracks, Dornstadt, was also named for him in 1965. A third base named for him, the Field Marshal Rommel Barracks, Osterode, closed in 2004. The German destroyer Rommel was named for him in 1969 and christened by his widow; the ship was decommissioned in 1998.", "title": "Commemoration" }, { "paragraph_id": 171, "text": "The Rommel Memorial was erected in Heidenheim in 1961. In 2020, a sculpture of a landmine victim was placed next to the Rommel Memorial in Heidenheim. The city mayor Bernhard Ilg comments that, regarding \"the great son of Heidenheim\", \"there are many opinions\". Heidenheim eventually dedicated the Memorial towards a stand against war, militarism and extremism, stating that when the memorial was erected in 1961, statements were added that now are not compatible with modern knowledge about Rommel. The Deutsche Welle notes that the 17 million mines the British, Italian, and German armies left continue to claim lives to this day.", "title": "Commemoration" }, { "paragraph_id": 172, "text": "In Aalen, after a discussion on renaming a street named after him, a new place of commemoration was created, where stelae with information on the lives of Rommel and three opponents of the regime (Eugen Bolz, Friedrich Schwarz and Karl Mikeller) stand together (Rommel's stele is dark blue and rusty red while the others are light-coloured). The History Association of Aalen, together with an independent commission of historians from Düsseldorf, welcomes the keeping of the street's name and notes that Rommel was neither a war criminal nor a resistance fighter, but perpetrator and victim at the same time – he willingly served as a figurehead for the regime, then belatedly recognised his mistake and paid for it with his life. An education program named \"Erwin Rommel and Aalen\" for school children in Aalen has also been established.", "title": "Commemoration" }, { "paragraph_id": 173, "text": "In 2021, the Student Council of the Friedrich-Alexander-University Erlangen-Nürnberg (FAU) decided to change the name of their Süd-Campus (South Campus, Erlangen) to Rommel-Campus, emphasising that the city of Erlangen stands behind the name and the university needs to do the same. 
The university's branch of the Education and Science Workers' Union (GEW) describes the decision as problematic considering Rommel's history of supporting the Nazi regime militarily and propagandistically.", "title": "Commemoration" }, { "paragraph_id": 174, "text": "Numerous streets in Germany, especially in Rommel's home state of Baden-Württemberg, are named in his honour, including the street near where his last home was located. The Rommel Museum opened in 1989 in the Villa Lindenhof in Herrlingen. The museum now operates under the name Museum Lebenslinien (Lifelines Museum), which presents the lives of Rommel and other notable residents of Herrlingen, including the poet Gertrud Kantorowicz (whose collection is presented together with the Rommel Archive inside a building on a road named after Rommel) and the educators Anna Essinger and Hugo Rosenthal. There is also a Rommel Museum in Mersa Matruh in Egypt, which opened in 1977 and is located in one of Rommel's former headquarters; various other localities and establishments in Mersa Matruh, including Rommel Beach, are also named for Rommel. The reason for the naming is that he respected the Bedouins' traditions and the sanctity of their homes (he always kept his troops at least 2 kilometres from their houses) and refused to poison the wells against the Allies, fearing doing so would harm the population.", "title": "Commemoration" }, { "paragraph_id": 175, "text": "In Italy, the annual marathon tour \"Rommel Trail\", which is sponsored by the Protezione Civile and the autonomous region of Friuli Venezia Giulia through its tourism agency, celebrates Rommel and the Battle of Caporetto. The naming and sponsoring (at that time by the center-left PD) were criticised by the politician Giuseppe Civati in 2017.", "title": "Commemoration" }, { "paragraph_id": 176, "text": "Informational notes", "title": "References" }, { "paragraph_id": 177, "text": "Citations", "title": "References" }, { "paragraph_id": 178, "text": "Bibliography", "title": "References" } ]
Johannes Erwin Eugen Rommel was a German Generalfeldmarschall during World War II. Popularly known as the Desert Fox, he served in the Wehrmacht of Nazi Germany, as well as in the Reichswehr of the Weimar Republic and the army of Imperial Germany. Rommel was injured multiple times in both world wars. Rommel was a highly decorated officer in World War I and was awarded the Pour le Mérite for his actions on the Italian Front. In 1937, he published his classic book on military tactics, Infantry Attacks, drawing on his experiences in that war. In World War II, he commanded the 7th Panzer Division during the 1940 invasion of France. His leadership of German and Italian forces in the North African campaign established his reputation as one of the ablest tank commanders of the war, and earned him the nickname der Wüstenfuchs, "the Desert Fox". Among his British adversaries he had a reputation for chivalry, and his phrase "war without hate" has been uncritically used to describe the North African campaign. A number of historians have since rejected the phrase as a myth and uncovered numerous examples of German war crimes and abuses towards enemy soldiers and native populations in Africa during the conflict. Other historians note that there is no clear evidence Rommel was involved in or aware of these crimes, with some pointing out that the war in the desert, as fought by Rommel and his opponents, still came as close to a clean fight as there was in World War II. He later commanded the German forces opposing the Allied cross-channel invasion of Normandy in June 1944. With the Nazis gaining power in Germany, Rommel gradually accepted the new regime. Historians have given different accounts of the specific period and his motivations. He was a supporter of Adolf Hitler, at least until near the end of the war, if not necessarily sympathetic to the party and the paramilitary forces associated with it. In 1944, Rommel was implicated in the 20 July plot to assassinate Hitler. Because of Rommel's status as a national hero, Hitler wanted to eliminate him quietly instead of having him immediately executed, as many other plotters were. Rommel was given a choice between suicide, in return for assurances that his reputation would remain intact and that his family would not be persecuted following his death, or facing a trial that would result in his disgrace and execution; he chose the former and took a cyanide pill. Rommel was given a state funeral, and it was announced that he had succumbed to his injuries from the strafing of his staff car in Normandy. Rommel became a larger-than-life figure in both Allied and Nazi propaganda, and in postwar popular culture. Numerous authors portray him as an apolitical, brilliant commander and a victim of Nazi Germany, although this assessment is contested by other authors as the Rommel myth. Rommel's reputation for conducting a clean war was used in the interest of the West German rearmament and reconciliation between the former enemies – the United Kingdom and the United States on one side and the new Federal Republic of Germany on the other. Several of Rommel's former subordinates, notably his chief of staff Hans Speidel, played key roles in German rearmament and integration into NATO in the postwar era. The German Army's largest military base, the Field Marshal Rommel Barracks, Augustdorf, and the third ship of the German Navy's Lütjens class of destroyers are both named in his honour. 
His son Manfred Rommel was the longtime mayor of Stuttgart, Germany, and the namesake of Stuttgart Airport.
2001-08-17T16:25:15Z
2023-12-31T16:20:07Z
[ "Template:Overly detailed", "Template:Cite book", "Template:NDB", "Template:Main", "Template:ISSN", "Template:Cite journal", "Template:PM20", "Template:Refbegin", "Template:Subject bar", "Template:Dead link", "Template:Cite episode", "Template:Cite AV media", "Template:Reflist", "Template:Cite web", "Template:Use dmy dates", "Template:Clarify", "Template:See also", "Template:Navboxes", "Template:R", "Template:Excessive citations inline", "Template:Efn", "Template:Pp-pc", "Template:Infobox military person", "Template:Page needed", "Template:Ship", "Template:Further information", "Template:Ill", "Template:Nbsp", "Template:Citation needed", "Template:TOC limit", "Template:Convert", "Template:YouTube", "Template:Refn", "Template:ISBN", "Template:Very long", "Template:IPA-de", "Template:Lang-de", "Template:Interlanguage link", "Template:Sfn", "Template:Illm", "Template:Cbignore", "Template:Erwin Rommel", "Template:Further", "Template:Cite news", "Template:Authority control", "Template:Notelist", "Template:Citation", "Template:Cite contribution", "Template:Internet Archive author", "Template:Refend", "Template:Short description", "Template:Redirect", "Template:Use British English", "Template:Blockquote" ]
https://en.wikipedia.org/wiki/Erwin_Rommel
9,518
Edmund Husserl
Edmund Gustav Albrecht Husserl (/ˈhʊsɜːrl/ HUUSS-url, US also /ˈhʊsərəl/ HUUSS-ər-əl, German: [ˈɛtmʊnt ˈhʊsɐl]; 8 April 1859 – 27 April 1938) was an Austrian-German philosopher and mathematician who established the school of phenomenology. In his early work, he elaborated critiques of historicism and of psychologism in logic based on analyses of intentionality. In his mature work, he sought to develop a systematic foundational science based on the so-called phenomenological reduction. Arguing that transcendental consciousness sets the limits of all possible knowledge, Husserl redefined phenomenology as a transcendental-idealist philosophy. Husserl's thought profoundly influenced 20th-century philosophy, and he remains a notable figure in contemporary philosophy and beyond. Husserl studied mathematics, taught by Karl Weierstrass and Leo Königsberger, and philosophy taught by Franz Brentano and Carl Stumpf. He taught philosophy as a Privatdozent at Halle from 1887, then as professor, first at Göttingen from 1901, then at Freiburg from 1916 until he retired in 1928, after which he remained highly productive. In 1933, under racial laws of the Nazi Party, Husserl was expelled from the library of the University of Freiburg due to his Jewish family background and months later resigned from the Deutsche Akademie. Following an illness, he died in Freiburg in 1938. Husserl was born in 1859 in Proßnitz in the Margraviate of Moravia in the Austrian Empire (today Prostějov in the Czech Republic). He was born into a Jewish family, the second of four children. His father was a milliner. His childhood was spent in Prostějov, where he attended the secular primary school. Then Husserl traveled to Vienna to study at the Realgymnasium there, followed next by the Staatsgymnasium in Olmütz. At the University of Leipzig from 1876 to 1878, Husserl studied mathematics, physics, and astronomy. At Leipzig, he was inspired by philosophy lectures given by Wilhelm Wundt, one of the founders of modern psychology. Then he moved to the Frederick William University of Berlin (the present-day Humboldt University of Berlin) in 1878 where he continued his study of mathematics under Leopold Kronecker and the renowned Karl Weierstrass. In Berlin he found a mentor in Tomáš Garrigue Masaryk, then a former philosophy student of Franz Brentano and later the first president of Czechoslovakia. There Husserl also attended Friedrich Paulsen's philosophy lectures. In 1881 he left for the University of Vienna to complete his mathematics studies under the supervision of Leo Königsberger (a former student of Weierstrass). At Vienna in 1883 he obtained his PhD with the work Beiträge zur Variationsrechnung (Contributions to the Calculus of Variations). Evidently as a result of his becoming familiar with the New Testament during his twenties, Husserl asked to be baptized into the Lutheran Church in 1886. Husserl's father Adolf had died in 1884. Herbert Spiegelberg writes, "While outward religious practice never entered his life any more than it did that of most academic scholars of the time, his mind remained open for the religious phenomenon as for any other genuine experience." At times Husserl saw his goal as one of moral "renewal". Although a steadfast proponent of a radical and rational autonomy in all things, Husserl could also speak "about his vocation and even about his mission under God's will to find new ways for philosophy and science," observes Spiegelberg. 
Following his PhD in mathematics, Husserl returned to Berlin to work as the assistant to Karl Weierstrass. Yet already Husserl had felt the desire to pursue philosophy. Then Professor Weierstrass became very ill. Husserl became free to return to Vienna where, after serving a short military duty, he devoted his attention to philosophy. In 1884 at the University of Vienna he attended the lectures of Franz Brentano on philosophy and philosophical psychology. Brentano introduced him to the writings of Bernard Bolzano, Hermann Lotze, J. Stuart Mill, and David Hume. Husserl was so impressed by Brentano that he decided to dedicate his life to philosophy; indeed, Franz Brentano is often credited as being his most important influence, e.g., with regard to intentionality. Following academic advice, two years later in 1886 Husserl followed Carl Stumpf, a former student of Brentano, to the University of Halle, seeking to obtain his habilitation, which would qualify him to teach at the university level. There, under Stumpf's supervision, he wrote Über den Begriff der Zahl (On the Concept of Number) in 1887, which would serve later as the basis for his first important work, Philosophie der Arithmetik (1891). In 1887 Husserl married Malvine Steinschneider, a union that would last over fifty years. In 1892 their daughter Elizabeth was born, in 1893 their son Gerhart, and in 1894 their son Wolfgang. Elizabeth would marry in 1922, and Gerhart in 1923; Wolfgang, however, became a casualty of the First World War. Gerhart would become a philosopher of law, contributing to the subject of comparative law, teaching in the United States and after the war in Austria. Following his marriage Husserl began his long teaching career in philosophy. He started in 1887 as a Privatdozent at the University of Halle. In 1891 he published his Philosophie der Arithmetik. Psychologische und logische Untersuchungen, which, drawing on his prior studies in mathematics and philosophy, proposed a psychological context as the basis of mathematics. It drew the adverse notice of Gottlob Frege, who criticized its psychologism. In 1901 Husserl with his family moved to the University of Göttingen, where he taught as extraordinarius professor. Just prior to this a major work of his, Logische Untersuchungen (Halle, 1900–1901), was published. Volume One contains seasoned reflections on "pure logic" in which he carefully refutes "psychologism". This work was well received and became the subject of a seminar given by Wilhelm Dilthey; Husserl in 1905 traveled to Berlin to visit Dilthey. Two years later in Italy he paid a visit to Franz Brentano, his inspiring old teacher, and to Constantin Carathéodory, the mathematician. Kant and Descartes were also now influencing his thought. In 1910 he became joint editor of the journal Logos. During this period Husserl had delivered lectures on internal time consciousness, which several decades later his former student Heidegger edited for publication. In 1912 at Freiburg the journal Jahrbuch für Philosophie und Phänomenologische Forschung ("Yearbook for Philosophy and Phenomenological Research") was founded by Husserl and his school; it published articles of their phenomenological movement from 1913 to 1930. His important work Ideen was published in its first issue (Vol. 1, Issue 1, 1913). 
Before beginning Ideen, Husserl's thought had reached the stage where "each subject is 'presented' to itself, and to each all others are 'presentiated' (Vergegenwärtigung), not as parts of nature but as pure consciousness". Ideen advanced his transition to a "transcendental interpretation" of phenomenology, a view later criticized by, among others, Jean-Paul Sartre. In Ideen Paul Ricœur sees the development of Husserl's thought as leading "from the psychological cogito to the transcendental cogito". As phenomenology further evolves, it leads (when viewed from another vantage point in Husserl's 'labyrinth') to "transcendental subjectivity". Also in Ideen Husserl explicitly elaborates the phenomenological and eidetic reductions. Ivan Ilyin and Karl Jaspers visited Husserl at Göttingen. In October 1914 both his sons were sent to fight on the Western Front of World War I, and the following year one of them, Wolfgang Husserl, was badly injured. On 8 March 1916, on the battlefield of Verdun, Wolfgang was killed in action. The next year his other son Gerhart Husserl was wounded in the war but survived. His own mother Julia died. In November 1917 one of his outstanding students and later a noted philosophy professor in his own right, Adolf Reinach, was killed in the war while serving in Flanders. Husserl had transferred in 1916 to the University of Freiburg (in Freiburg im Breisgau) where he continued bringing his work in philosophy to fruition, now as a full professor. Edith Stein served as his personal assistant during his first few years in Freiburg, followed later by Martin Heidegger from 1920 to 1923. The mathematician Hermann Weyl began corresponding with him in 1918. Husserl gave four lectures on Phenomenological method at University College London in 1922. The University of Berlin in 1923 called on him to relocate there, but he declined the offer. In 1926 Heidegger dedicated his book Sein und Zeit (Being and Time) to him "in grateful respect and friendship." Husserl remained in his professorship at Freiburg until he requested retirement, teaching his last class on 25 July 1928. A Festschrift to celebrate his seventieth birthday was presented to him on 8 April 1929. Despite retirement, Husserl gave several notable lectures. The first, at Paris in 1929, led to Méditations cartésiennes (Paris 1931). Husserl here reviews the phenomenological epoché (or phenomenological reduction), presented earlier in his pivotal Ideen (1913), in terms of a further reduction of experience to what he calls a 'sphere of ownness.' From within this sphere, which Husserl enacts in order to show the impossibility of solipsism, the transcendental ego finds itself always already paired with the lived body of another ego, another monad. This 'a priori' interconnection of bodies, given in perception, is what founds the interconnection of consciousnesses known as transcendental intersubjectivity, which Husserl would go on to describe at length in volumes of unpublished writings. There has been a debate over whether or not Husserl's description of ownness and its movement into intersubjectivity is sufficient to reject the charge of solipsism, to which Descartes, for example, was subject. One argument against Husserl's description works this way: instead of infinity and the Deity being the ego's gateway to the Other, as in Descartes, Husserl's ego in the Cartesian Meditations itself becomes transcendent. It remains, however, alone (unconnected). 
Only the ego's grasp "by analogy" of the Other (e.g., by conjectural reciprocity) allows the possibility for an 'objective' intersubjectivity, and hence for community. In 1933, the racial laws of the new National Socialist German Workers Party were enacted. On 6 April Husserl was banned from using the library at the University of Freiburg, or any other academic library; the following week, after a public outcry, he was reinstated. Yet his colleague Heidegger was elected Rector of the university on 21–22 April, and joined the Nazi Party. By contrast, in July Husserl resigned from the Deutsche Akademie. Later Husserl lectured at Prague in 1935 and Vienna in 1936, which resulted in a very differently styled work that, while innovative, is no less problematic: Die Krisis (Belgrade 1936). Husserl describes here the cultural crisis gripping Europe, then approaches a philosophy of history, discussing Galileo, Descartes, several British philosophers, and Kant. The apolitical Husserl before had specifically avoided such historical discussions, pointedly preferring to go directly to an investigation of consciousness. Merleau-Ponty and others question whether Husserl here does not undercut his own position, in that Husserl had attacked in principle historicism, while specifically designing his phenomenology to be rigorous enough to transcend the limits of history. On the contrary, Husserl may be indicating here that historical traditions are merely features given to the pure ego's intuition, like any other. A longer section follows on the "lifeworld" [Lebenswelt], one not observed by the objective logic of science, but a world seen in our subjective experience. Yet a problem arises similar to that dealing with 'history' above, a chicken-and-egg problem. Does the lifeworld contextualize and thus compromise the gaze of the pure ego, or does the phenomenological method nonetheless raise the ego up transcendent? These last writings presented the fruits of his professional life. Since his university retirement Husserl had "worked at a tremendous pace, producing several major works." After suffering a fall in the autumn of 1937, the philosopher became ill with pleurisy. Edmund Husserl died in Freiburg on 27 April 1938, having just turned 79. His wife Malvine survived him. Eugen Fink, his research assistant, delivered his eulogy. Gerhard Ritter was the only Freiburg faculty member to attend the funeral, as an anti-Nazi protest. Husserl was rumoured to have been denied the use of the library at Freiburg as a result of the anti-Jewish legislation of April 1933. However, among other disabilities Husserl was unable to publish his works in Nazi Germany [see above footnote to Die Krisis (1936)]. It was also rumoured that his former pupil Martin Heidegger informed Husserl that he was discharged, but it was actually the previous rector. Apparently Husserl and Heidegger had moved apart during the 1920s, which became clearer after 1928 when Husserl retired and Heidegger succeeded to his university chair. In the summer of 1929 Husserl had studied carefully selected writings of Heidegger, coming to the conclusion that on several of their key positions they differed: e.g., Heidegger substituted Dasein ["Being-there"] for the pure ego, thus transforming phenomenology into an anthropology, a type of psychologism strongly disfavored by Husserl. 
Such observations of Heidegger, along with a critique of Max Scheler, were put into a lecture Husserl gave to various Kant Societies in Frankfurt, Berlin, and Halle during 1931 entitled Phänomenologie und Anthropologie. In the war-time 1941 edition of Heidegger's primary work, Being and Time (Sein und Zeit, first published in 1927), the original dedication to Husserl was removed. This was not due to a negation of the relationship between the two philosophers, however, but rather was the result of a suggested censorship by Heidegger's publisher who feared that the book might otherwise be banned by the Nazi regime. The dedication can still be found in a footnote on page 38, thanking Husserl for his guidance and generosity. Husserl, of course, had died three years earlier. In post-war editions of Sein und Zeit the dedication to Husserl is restored. The complex, troubled, and sundered philosophical relationship between Husserl and Heidegger has been widely discussed. On 4 May 1933, Professor Edmund Husserl addressed the recent regime change in Germany and its consequences: The future alone will judge which was the true Germany in 1933, and who were the true Germans—those who subscribe to the more or less materialistic-mythical racial prejudices of the day, or those Germans pure in heart and mind, heirs to the great Germans of the past whose tradition they revere and perpetuate. After his death, Husserl's manuscripts, amounting to approximately 40,000 pages of "Gabelsberger" stenography and his complete research library, were in 1939 smuggled to the Catholic University of Louvain in Belgium by the Franciscan priest Herman Van Breda. There they were deposited at Leuven to form the Husserl-Archives of the Higher Institute of Philosophy. Much of the material in his research manuscripts has since been published in the Husserliana critical edition series. In his first works, Husserl combined mathematics, psychology, and philosophy with the goal of providing a sound foundation for mathematics. He analyzed the psychological process needed to obtain the concept of number and then built up a theory on this analysis. He used methods and concepts taken from his teachers. From Weierstrass he derived the idea of generating the concept of number by counting a certain collection of objects. From Brentano and Stumpf he took the distinction between proper and improper presenting. In an example, Husserl explained this in the following way: if you are standing in front of a house, you have a proper, direct presentation of that house, but if you are looking for it and ask for directions, then these directions (e.g. the house on the corner of this and that street) are an indirect, improper presentation. In other words, you can have a proper presentation of an object if it is actually present, and an improper (or symbolic, as he also calls it) one if you only can indicate that object through signs, symbols, etc. Husserl's Logical Investigations (1900–1901) is considered the starting point for the formal theory of wholes and their parts known as mereology. Another important element that Husserl took over from Brentano was intentionality, the notion that the main characteristic of consciousness is that it is always intentional. While often simplistically summarised as "aboutness" or the relationship between mental acts and the external world, Brentano defined it as the main characteristic of mental phenomena, by which they could be distinguished from physical phenomena. 
Every mental phenomenon, every psychological act, has a content, is directed at an object (the intentional object). Every belief, desire, etc. has an object that it is about: the believed, the wanted. Brentano used the expression "intentional inexistence" to indicate the status of the objects of thought in the mind. The property of being intentional, of having an intentional object, was the key feature to distinguish mental phenomena and physical phenomena, because physical phenomena lack intentionality altogether. Some years after the 1900–1901 publication of his main work, the Logische Untersuchungen (Logical Investigations), Husserl made some key conceptual elaborations which led him to assert that in order to study the structure of consciousness, one would have to distinguish between the act of consciousness and the phenomena at which it is directed (the objects as intended). Knowledge of essences would only be possible by "bracketing" all assumptions about the existence of an external world. This procedure he called "epoché". These new concepts prompted the publication of the Ideen (Ideas) in 1913, in which they were at first incorporated, and a plan for a second edition of the Logische Untersuchungen. From the Ideen onward, Husserl concentrated on the ideal, essential structures of consciousness. The metaphysical problem of establishing the reality of what we perceive, as distinct from the perceiving subject, was of little interest to Husserl in spite of his being a transcendental idealist. Husserl proposed that the world of objects—and of ways in which we direct ourselves toward and perceive those objects—is normally conceived of in what he called the "natural attitude", which is characterized by a belief that objects exist distinct from the perceiving subject and exhibit properties that we see as emanating from them (this attitude is also called physicalist objectivism). Husserl proposed a radical new phenomenological way of looking at objects by examining how we, in our many ways of being intentionally directed toward them, actually "constitute" them (to be distinguished from materially creating objects or objects merely being figments of the imagination); in the Phenomenological standpoint, the object ceases to be something simply "external" and ceases to be seen as providing indicators about what it is, and becomes a grouping of perceptual and functional aspects that imply one another under the idea of a particular object or "type". The notion of objects as real is not expelled by phenomenology, but "bracketed" as a way in which we regard objects—instead of a feature that inheres in an object's essence founded in the relation between the object and the perceiver. In order to better understand the world of appearances and objects, phenomenology attempts to identify the invariant features of how objects are perceived and pushes attributions of reality into their role as an attribution about the things we perceive (or an assumption underlying how we perceive objects). The major dividing line in Husserl's thought is the turn to transcendental idealism. In a later period, Husserl began to wrestle with the complicated issues of intersubjectivity, specifically, how communication about an object can be assumed to refer to the same ideal entity (Cartesian Meditations, Meditation V). Husserl tries new methods of bringing his readers to understand the importance of phenomenology to scientific inquiry (and specifically to psychology) and what it means to "bracket" the natural attitude. 
The Crisis of the European Sciences is Husserl's unfinished work that deals most directly with these issues. In it, Husserl for the first time attempts a historical overview of the development of Western philosophy and science, emphasizing the challenges presented by their increasingly one-sidedly empirical and naturalistic orientation. Husserl declares that mental and spiritual reality possess their own reality independent of any physical basis, and that a science of the mind ('Geisteswissenschaft') must be established on as scientific a foundation as the natural sciences have managed: "It is my conviction that intentional phenomenology has for the first time made spirit as spirit the field of systematic scientific experience, thus effecting a total transformation of the task of knowledge." Husserl's thought is revolutionary in several ways, most notably in the distinction between "natural" and "phenomenological" modes of understanding. In the former, sense-perception in correspondence with the material realm constitutes the known reality, and understanding is premised on the accuracy of the perception and the objective knowability of what is called the "real world". Phenomenological understanding strives to be rigorously "presuppositionless" by means of what Husserl calls "phenomenological reduction". This reduction is not conditioned but rather transcendental: in Husserl's terms, pure consciousness of absolute Being. In Husserl's work, consciousness of any given thing calls for discerning its meaning as an "intentional object". Such an object does not simply strike the senses, to be interpreted or misinterpreted by mental reason; it has already been selected and grasped, grasping being an etymological connotation, of percipere, the root of "perceive". From Logical Investigations (1900/1901) to Experience and Judgment (published in 1939), Husserl expressed clearly the difference between meaning and object. He identified several different kinds of names. For example, there are names that have the role of properties that uniquely identify an object. Each of these names expresses a meaning and designates the same object. Examples of this are "the victor in Jena" and "the loser in Waterloo", or "the equilateral triangle" and "the equiangular triangle"; in both cases, both names express different meanings, but designate the same object. There are names which have no meaning, but have the role of designating an object: "Aristotle", "Socrates", and so on. Finally, there are names which designate a variety of objects. These are called "universal names"; their meaning is a "concept" and refers to a series of objects (the extension of the concept). The way we know sensible objects is called "sensible intuition". Husserl also identifies a series of "formal words" which are necessary to form sentences and have no sensible correlates. Examples of formal words are "a", "the", "more than", "over", "under", "two", "group", and so on. Every sentence must contain formal words to designate what Husserl calls "formal categories". There are two kinds of categories: meaning categories and formal-ontological categories. Meaning categories relate judgments; they include forms of conjunction, disjunction, forms of plural, among others. Formal-ontological categories relate objects and include notions such as set, cardinal number, ordinal number, part and whole, relation, and so on. The way we know these categories is through a faculty of understanding called "categorial intuition". 
Through sensible intuition our consciousness constitutes what Husserl calls a "situation of affairs" (Sachlage). It is a passive constitution where objects themselves are presented to us. To this situation of affairs, through categorial intuition, we are able to constitute a "state of affairs" (Sachverhalt). One situation of affairs through objective acts of consciousness (acts of constituting categorially) can serve as the basis for constituting multiple states of affairs. For example, suppose a and b are two sensible objects in a certain situation of affairs. We can use it as a basis to say, "a<b" and "b>a", two judgments which designate the same state of affairs. For Husserl a sentence has a proposition or judgment as its meaning, and refers to a state of affairs which has a situation of affairs as a reference base. Husserl sees ontology as a science of essences. Sciences of essences are contrasted with factual sciences: the former are knowable a priori and provide the foundation for the latter, which are knowable a posteriori. Husserl distinguishes between formal ontology, which investigates the essence of objectivity in general, and regional ontologies, which study regional essences that are shared by all entities belonging to the region. Regions correspond to the highest genera of concrete entities: material nature, personal consciousness and interpersonal spirit. Husserl's method for studying ontology and sciences of essence in general is called eidetic variation. It involves imagining an object of the kind under investigation and varying its features. The changed feature is inessential to this kind if the object can survive its change; otherwise it belongs to the kind's essence. For example, a triangle remains a triangle if one of its sides is extended but it ceases to be a triangle if a fourth side is added. Regional ontology involves applying this method to the essences corresponding to the highest genera. Husserl believed that truth-in-itself has as ontological correlate being-in-itself, just as meaning categories have formal-ontological categories as correlates. Logic is a formal theory of judgment that studies the formal a priori relations among judgments using meaning categories. Mathematics, on the other hand, is formal ontology; it studies all the possible forms of being (of objects). Hence for both logic and mathematics, the different formal categories are the objects of study, not the sensible objects themselves. The problem with the psychological approach to mathematics and logic is that it fails to account for the fact that this approach is about formal categories, and not simply about abstractions from sensibility alone. The reason why we do not deal with sensible objects in mathematics is because of another faculty of understanding called "categorial abstraction." Through this faculty we are able to get rid of sensible components of judgments, and just focus on formal categories themselves. Thanks to "eidetic reduction" (or "essential intuition"), we are able to grasp the possibility, impossibility, necessity and contingency among concepts and among formal categories. Categorial intuition, along with categorial abstraction and eidetic reduction, is the basis for logical and mathematical knowledge. 
Husserl criticized the logicians of his day for not focusing on the relation between subjective processes that give us objective knowledge of pure logic. All subjective activities of consciousness need an ideal correlate, and objective logic (constituted noematically) as it is constituted by consciousness needs a noetic correlate (the subjective activities of consciousness). Husserl stated that logic has three strata, each further away from consciousness and psychology than those that precede it. The ontological correlate to the third stratum is the "theory of manifolds". In formal ontology, it is a free investigation where a mathematician can assign several meanings to several symbols, and all their possible valid deductions in a general and indeterminate manner. It is, properly speaking, the most universal mathematics of all. Through the posit of certain indeterminate objects (formal-ontological categories) as well as any combination of mathematical axioms, mathematicians can explore the apodeictic connections between them, as long as consistency is preserved. According to Husserl, this view of logic and mathematics accounted for the objectivity of a series of mathematical developments of his time, such as n-dimensional manifolds (both Euclidean and non-Euclidean), Hermann Grassmann's theory of extensions, William Rowan Hamilton's Hamiltonians, Sophus Lie's theory of transformation groups, and Cantor's set theory. Jacob Klein was one student of Husserl who pursued this line of inquiry, seeking to "desedimentize" mathematics and the mathematical sciences. After obtaining his PhD in mathematics, Husserl began analyzing the foundations of mathematics from a psychological point of view. In his habilitation thesis, On the Concept of Number (1886) and in his Philosophy of Arithmetic (1891), Husserl sought, by employing Brentano's descriptive psychology, to define the natural numbers in a way that advanced the methods and techniques of Karl Weierstrass, Richard Dedekind, Georg Cantor, Gottlob Frege, and other contemporary mathematicians. Later, in the first volume of his Logical Investigations, the Prolegomena of Pure Logic, Husserl, while attacking the psychologistic point of view in logic and mathematics, also appears to reject much of his early work, although the forms of psychologism analysed and refuted in the Prolegomena did not apply directly to his Philosophy of Arithmetic. Some scholars question whether Frege's negative review of the Philosophy of Arithmetic helped turn Husserl towards modern Platonism, but he had already discovered the work of Bernard Bolzano independently around 1890/91. In his Logical Investigations, Husserl explicitly mentioned Bolzano, G. W. Leibniz and Hermann Lotze as inspirations for his newer position. Husserl's review of Ernst Schröder, published before Frege's landmark 1892 article, clearly distinguishes sense from reference; thus Husserl's notions of noema and object also arose independently. Likewise, in his criticism of Frege in the Philosophy of Arithmetic, Husserl remarks on the distinction between the content and the extension of a concept. Moreover, the distinction between the subjective mental act, namely the content of a concept, and the (external) object, was developed independently by Brentano and his school, and may have surfaced as early as Brentano's 1870s lectures on logic. Scholars such as J. N. Mohanty, Claire Ortiz Hill, and Guillermo E. 
Rosado Haddock, among others, have argued that Husserl's so-called change from psychologism to Platonism came about independently of Frege's review. For example, the review falsely accuses Husserl of subjectivizing everything, so that no objectivity is possible, and falsely attributes to him a notion of abstraction whereby objects disappear until we are left with numbers as mere ghosts. Contrary to what Frege states, in Husserl's Philosophy of Arithmetic we already find two different kinds of representations: subjective and objective. Moreover, objectivity is clearly defined in that work. Frege's attack seems to be directed at certain foundational doctrines then current in Weierstrass's Berlin School, of which Husserl and Cantor cannot be said to be orthodox representatives. Furthermore, various sources indicate that Husserl changed his mind about psychologism as early as 1890, a year before he published the Philosophy of Arithmetic. Husserl stated that by the time he published that book, he had already changed his mind—that he had doubts about psychologism from the very outset. He attributed this change of mind to his reading of Leibniz, Bolzano, Lotze, and David Hume. Husserl makes no mention of Frege as a decisive factor in this change. In his Logical Investigations, Husserl mentions Frege only twice, once in a footnote to point out that he had retracted three pages of his criticism of Frege's The Foundations of Arithmetic, and again to question Frege's use of the word Bedeutung to designate "reference" rather than "meaning" (sense). In a letter dated 24 May 1891, Frege thanked Husserl for sending him a copy of the Philosophy of Arithmetic and Husserl's review of Ernst Schröder's Vorlesungen über die Algebra der Logik. In the same letter, Frege used the review of Schröder's book to analyze Husserl's notion of the sense of reference of concept words. Hence Frege recognized, as early as 1891, that Husserl distinguished between sense and reference. Consequently, Frege and Husserl independently elaborated a theory of sense and reference before 1891. Commentators argue that Husserl's notion of noema has nothing to do with Frege's notion of sense, because noemata are necessarily fused with noeses, which are the conscious activities of consciousness. Noemata have three different levels. Consequently, in intentional activities, even non-existent objects can be constituted, and form part of the whole noema. Frege, however, did not conceive of objects as forming parts of senses: If a proper name denotes a non-existent object, it does not have a reference, hence concepts with no objects have no truth value in arguments. Moreover, Husserl did not maintain that predicates of sentences designate concepts. According to Frege the reference of a sentence is a truth value; for Husserl it is a "state of affairs." Frege's notion of "sense" is unrelated to Husserl's noema, while the latter's notions of "meaning" and "object" differ from those of Frege. In detail, Husserl's conception of logic and mathematics differs from that of Frege, who held that arithmetic could be derived from logic. For Husserl this is not the case: mathematics (with the exception of geometry) is the ontological correlate of logic, and while both fields are related, neither one is strictly reducible to the other. Reacting against authors such as J. S. Mill, Christoph von Sigwart and his own former teacher Brentano, Husserl criticised their psychologism in mathematics and logic, i.e. 
their conception of these abstract and a priori sciences as having an essentially empirical foundation and a prescriptive or descriptive nature. According to psychologism, logic would not be an autonomous discipline, but a branch of psychology, either proposing a prescriptive and practical "art" of correct judgement (as Brentano and some of his more orthodox students did) or a description of the factual processes of human thought. Husserl pointed out that the failure of anti-psychologists to defeat psychologism was a result of being unable to distinguish between the foundational, theoretical side of logic, and the applied, practical side. Pure logic does not deal at all with "thoughts" or "judgings" as mental episodes but about a priori laws and conditions for any theory and any judgments whatsoever, conceived as propositions in themselves. Since "truth-in-itself" has "being-in-itself" as ontological correlate, and since psychologists reduce truth (and hence logic) to empirical psychology, the inevitable consequence is scepticism. Psychologists have also not been successful in showing how from induction or psychological processes we can justify the absolute certainty of logical principles, such as the principles of identity and non-contradiction. It is therefore futile to base certain logical laws and principles on uncertain processes of the mind. This confusion made by psychologism (and related disciplines such as biologism and anthropologism) can be due to three specific prejudices: 1. The first prejudice is the supposition that logic is somehow normative in nature. Husserl argues that logic is theoretical, i.e., that logic itself proposes a priori laws which are themselves the basis of the normative side of logic. Since mathematics is related to logic, he cites an example from mathematics: If we have a formula like "(a + b)(a – b) = a² – b²" it does not tell us how to think mathematically. It just expresses a truth. A proposition that says: "The product of the sum and the difference of a and b should give us the difference of the squares of a and b" does express a normative proposition, but this normative statement is based on the theoretical statement "(a + b)(a – b) = a² – b²". 2. For psychologists, the acts of judging, reasoning, deriving, and so on, are all psychological processes. Therefore, it is the role of psychology to provide the foundation of these processes. Husserl states that this effort made by psychologists is a "metábasis eis állo génos" (Gr. μετάβασις εἰς ἄλλο γένος, "a transgression to another field"). It is a metábasis because psychology cannot provide any foundations for a priori laws which themselves are the basis for all the ways we should think correctly. Psychologists have the problem of confusing intentional activities with the object of these activities. It is important to distinguish between the act of judging and the judgment itself, the act of counting and the number itself, and so on. Counting five objects is undeniably a psychological process, but the number 5 is not. 3. Judgments can be true or not true. Psychologists argue that judgments are true because they become "evidently" true to us. This evidence, a psychological process that "guarantees" truth, is indeed a psychological process. Husserl responds by saying that truth itself, as well as logical laws, always remain valid regardless of psychological "evidence" that they are true. No psychological process can explain the a priori objectivity of these logical truths. 
From this criticism to psychologism, the distinction between psychological acts and their intentional objects, and the difference between the normative side of logic and the theoretical side, derives from a Platonist conception of logic. This means that we should regard logical and mathematical laws as being independent of the human mind, and also as an autonomy of meanings. It is essentially the difference between the real (everything subject to time) and the ideal or irreal (everything that is atemporal), such as logical truths, mathematical entities, mathematical truths and meanings in general. David Carr commented on Husserl's following in his 1970 dissertation at Yale: "It is well known that Husserl was always disappointed at the tendency of his students to go their own way, to embark upon fundamental revisions of phenomenology rather than engage in the communal task" as originally intended by the radical new science. Notwithstanding, he did attract philosophers to phenomenology. Martin Heidegger is the best known of Husserl's students, the one whom Husserl chose as his successor at Freiburg. Heidegger's magnum opus Being and Time was dedicated to Husserl. They shared their thoughts and worked alongside each other for over a decade at the University of Freiburg, Heidegger being Husserl's assistant during 1920–1923. Heidegger's early work followed his teacher, but with time he began to develop new insights distinctively variant. Husserl became increasingly critical of Heidegger's work, especially in 1929, and included pointed criticism of Heidegger in lectures he gave during 1931. Heidegger, while acknowledging his debt to Husserl, followed a political position offensive and harmful to Husserl after the Nazis came to power in 1933, Husserl being of Jewish origin and Heidegger infamously being then a Nazi proponent. Academic discussion of Husserl and Heidegger is extensive. At Göttingen in 1913 Adolf Reinach (1884–1917) "was now Husserl's right hand. He was above all the mediator between Husserl and the students, for he understood extremely well how to deal with other persons, whereas Husserl was pretty much helpless in this respect." He was an original editor of Husserl's new journal, Jahrbuch; one of his works (giving a phenomenological analysis of the law of obligations) appeared in its first issue. Reinach was widely admired and a remarkable teacher. Husserl, in his 1917 obituary, wrote, "He wanted to draw only from the deepest sources, he wanted to produce only work of enduring value. And through his wise restraint he succeeded in this." Edith Stein was Husserl's student at Göttingen and Freiburg while she wrote her doctoral thesis The Empathy Problem as it Developed Historically and Considered Phenomenologically (1916). She then became his assistant at Freiburg in 1916–18. She later adapted her phenomenology to the modern school of modern Thomism. Ludwig Landgrebe became assistant to Husserl in 1923. From 1939 he collaborated with Eugen Fink at the Husserl-Archives in Leuven. In 1954 he became leader of the Husserl-Archives. Landgrebe is known as one of Husserl's closest associates, but also for his independent views relating to history, religion and politics as seen from the viewpoints of existentialist philosophy and metaphysics. Eugen Fink was a close associate of Husserl during the 1920s and 1930s. He wrote the Sixth Cartesian Meditation which Husserl said was the truest expression and continuation of his own work. Fink delivered the eulogy for Husserl in 1938. 
Roman Ingarden, an early student of Husserl at Freiburg, corresponded with Husserl into the mid-1930s. Ingarden did not, however, accept Husserl's later transcendental idealism, which he thought would lead to relativism. Ingarden wrote his works in both German and Polish. In his Spór o istnienie świata (Ger.: "Der Streit um die Existenz der Welt", Eng.: "Dispute about the existence of the world") he created his own realistic position, which also helped to spread phenomenology in Poland. Max Scheler met Husserl in Halle in 1901 and found in his phenomenology a methodological breakthrough for his own philosophy. Scheler, who was at Göttingen when Husserl taught there, was one of the few original editors of the journal Jahrbuch für Philosophie und Phänomenologische Forschung (1913). Scheler's work Formalism in Ethics and Nonformal Ethics of Value appeared in the new journal (1913 and 1916) and drew acclaim. The personal relationship between the two men, however, became strained due to Scheler's legal troubles, and Scheler returned to Munich. Although Scheler later criticised Husserl's idealistic logical approach and proposed instead a "phenomenology of love", he stated that he remained "deeply indebted" to Husserl throughout his work. Nicolai Hartmann was once thought to be at the center of phenomenology, though perhaps no longer. In 1921 the prestige of Hartmann the Neo-Kantian, who was Professor of Philosophy at Marburg, was added to the Movement; he "publicly declared his solidarity with the actual work of die Phänomenologie." Yet Hartmann's connections were with Max Scheler and the Munich circle; Husserl himself evidently did not consider him a phenomenologist. His philosophy, however, is said to include an innovative use of the method. In 1929 Emmanuel Levinas gave a presentation at one of Husserl's last seminars in Freiburg. That same year he also wrote a long review of Husserl's Ideen (1913), published in a French journal. With Gabrielle Peiffer, Levinas translated into French Husserl's Méditations cartésiennes (1931). He was at first impressed with Heidegger and began a book on him, but broke off the project when Heidegger became involved with the Nazis. After the war he wrote on Jewish spirituality; most of his family had been murdered by the Nazis in Lithuania. Levinas then began to write works that would become widely known and admired. Alfred Schutz's Phenomenology of the Social World seeks to rigorously ground Max Weber's interpretive sociology in Husserl's phenomenology. Husserl was impressed by this work and asked Schutz to be his assistant. Jean-Paul Sartre was also largely influenced by Husserl, although he later came to disagree with key points in his analyses. Sartre rejected Husserl's transcendental interpretations begun in his Ideen (1913) and instead followed Heidegger's ontology. Maurice Merleau-Ponty's Phenomenology of Perception is influenced by Edmund Husserl's work on perception, intersubjectivity, intentionality, space, and temporality, including Husserl's theory of retention and protention. Merleau-Ponty's descriptions of 'motor intentionality' and sexuality, for example, retain the important structure of the noetic/noematic correlation of Ideen I, yet further concretize what it means for Husserl when consciousness particularizes itself into modes of intuition. Merleau-Ponty's most clearly Husserlian work is, perhaps, "The Philosopher and His Shadow."
Depending on the interpretation of Husserl's accounts of eidetic intuition, given in his Phenomenological Psychology and Experience and Judgment, it may be that Merleau-Ponty accepted neither the "eidetic reduction" nor the "pure essence" said to result from it. Merleau-Ponty was the first student to study at the Husserl-Archives in Leuven. Gabriel Marcel explicitly rejected existentialism, because of its association with Sartre, but not phenomenology, which has enjoyed a wide following among French Catholics. He appreciated Husserl, Scheler, and (but with apprehension) Heidegger. Expressions of his such as "ontology of sensibility", used when referring to the body, indicate the influence of phenomenological thought. Kurt Gödel is known to have read Cartesian Meditations. He expressed very strong appreciation for Husserl's work, especially with regard to "bracketing" or "epoché". Hermann Weyl's interest in intuitionistic logic and impredicativity appears to have resulted from his reading of Husserl. He was introduced to Husserl's work through his wife, Helene Joseph, herself a student of Husserl at Göttingen. Colin Wilson used Husserl's ideas extensively in developing his "New Existentialism", particularly in regard to the "intentionality of consciousness", which he mentions in a number of his books. Rudolf Carnap was also influenced by Husserl: not only did Carnap draw on Husserl's notion of essential insight in his Der Raum, but his notions of "formation rules" and "transformation rules" are also founded on Husserl's philosophy of logic. Karol Wojtyła, who would later become Pope John Paul II, was influenced by Husserl. Phenomenology appears in his major work, The Acting Person (1969). Originally published in Polish, it was translated by Andrzej Potocki and edited by Anna-Teresa Tymieniecka in the Analecta Husserliana. The Acting Person combines phenomenological work with Thomistic ethics. Paul Ricœur translated many works of Husserl into French and also wrote many of his own studies of the philosopher. Among other works, Ricœur employed phenomenology in his Freud and Philosophy (1965). Jacques Derrida wrote several critical studies of Husserl early in his academic career. These included his dissertation, The Problem of Genesis in Husserl's Philosophy, and also his introduction to The Origin of Geometry. Derrida continued to make reference to Husserl in works such as Of Grammatology. Stanisław Leśniewski and Kazimierz Ajdukiewicz were inspired by Husserl's formal analysis of language. Accordingly, they employed phenomenology in the development of categorial grammar. José Ortega y Gasset visited Husserl at Freiburg in 1934. He credited phenomenology with having 'liberated him' from narrow neo-Kantian thought. While perhaps not a phenomenologist himself, he introduced the philosophy to Iberia and Latin America. Wilfrid Sellars, an influential figure in the so-called "Pittsburgh School" (Robert Brandom, John McDowell), had been a student of Marvin Farber, a pupil of Husserl, and was influenced by phenomenology through him: "Marvin Farber led me through my first careful reading of the Critique of Pure Reason and introduced me to Husserl. His combination of utter respect for the structure of Husserl's thought with the equally firm conviction that this structure could be given a naturalistic interpretation was undoubtedly a key influence on my own subsequent philosophical strategy." In his 1942 essay The Myth of Sisyphus, the absurdist philosopher Albert Camus acknowledges Husserl as an earlier philosopher who described and attempted to deal with the feeling of the absurd, but claims that Husserl committed "philosophical suicide" by elevating reason and ultimately arriving at ubiquitous Platonic forms and an abstract god. Hans Blumenberg received his habilitation in 1950 with a dissertation on ontological distance, an inquiry into the crisis of Husserl's phenomenology. Roger Scruton, despite some disagreements with Husserl, drew upon his work in Sexual Desire (1986). The influence of the Husserlian phenomenological tradition in the 21st century extends beyond the confines of the European and North American legacies. It has already begun, if indirectly, to influence scholarship in Eastern and Oriental thought, including research on the impetus of philosophical thinking in the history of ideas in Islam.
The Acting Person combines phenomenological work with Thomistic ethics.", "title": "Influence" }, { "paragraph_id": 73, "text": "Paul Ricœur has translated many works of Husserl into French and has also written many of his own studies of the philosopher. Among other works, Ricœur employed phenomenology in his Freud and Philosophy (1965).", "title": "Influence" }, { "paragraph_id": 74, "text": "Jacques Derrida wrote several critical studies of Husserl early in his academic career. These included his dissertation, The Problem of Genesis in Husserl's Philosophy, and also his introduction to The Origin of Geometry. Derrida continued to make reference to Husserl in works such as Of Grammatology.", "title": "Influence" }, { "paragraph_id": 75, "text": "Stanisław Leśniewski and Kazimierz Ajdukiewicz were inspired by Husserl's formal analysis of language. Accordingly, they employed phenomenology in the development of categorial grammar.", "title": "Influence" }, { "paragraph_id": 76, "text": "José Ortega y Gasset visited Husserl at Freiburg in 1934. He credited phenomenology for having 'liberated him' from a narrow neo-Kantian thought. While perhaps not a phenomenologist himself, he introduced the philosophy to Iberia and Latin America.", "title": "Influence" }, { "paragraph_id": 77, "text": "Wilfrid Sellars, an influential figure in the so-called \"Pittsburgh School\" (Robert Brandom, John McDowell) had been a student of Marvin Farber, a pupil of Husserl, and was influenced by phenomenology through him:", "title": "Influence" }, { "paragraph_id": 78, "text": "Marvin Farber led me through my first careful reading of the Critique of Pure Reason and introduced me to Husserl. His combination of utter respect for the structure of Husserl's thought with the equally firm conviction that this structure could be given a naturalistic interpretation was undoubtedly a key influence on my own subsequent philosophical strategy.", "title": "Influence" }, { "paragraph_id": 79, "text": "In his 1942 essay The Myth of Sisyphus, absurdist philosopher Albert Camus acknowledges Husserl as a previous philosopher who described and attempted to deal with the feeling of the absurd, but claims he committed \"philosophical suicide\" by elevating reason and ultimately arriving at ubiquitous Platonic forms and an abstract god.", "title": "Influence" }, { "paragraph_id": 80, "text": "Hans Blumenberg received his habilitation in 1950, with a dissertation on ontological distance, an inquiry into the crisis of Husserl's phenomenology.", "title": "Influence" }, { "paragraph_id": 81, "text": "Roger Scruton, despite some disagreements with Husserl, drew upon his work in Sexual Desire (1986).", "title": "Influence" }, { "paragraph_id": 82, "text": "The influence of the Husserlian phenomenological tradition in the 21st century extends beyond the confines of the European and North American legacies. It has already started to impact (indirectly) scholarship in Eastern and Oriental thought, including research on the impetus of philosophical thinking in the history of ideas in Islam.", "title": "Influence" } ]
Edmund Gustav Albrecht Husserl was an Austrian-German philosopher and mathematician who established the school of phenomenology. In his early work, he elaborated critiques of historicism and of psychologism in logic based on analyses of intentionality. In his mature work, he sought to develop a systematic foundational science based on the so-called phenomenological reduction. Arguing that transcendental consciousness sets the limits of all possible knowledge, Husserl redefined phenomenology as a transcendental-idealist philosophy. Husserl's thought profoundly influenced 20th-century philosophy, and he remains a notable figure in contemporary philosophy and beyond. Husserl studied mathematics, taught by Karl Weierstrass and Leo Königsberger, and philosophy taught by Franz Brentano and Carl Stumpf. He taught philosophy as a Privatdozent at Halle from 1887, then as professor, first at Göttingen from 1901, then at Freiburg from 1916 until he retired in 1928, after which he remained highly productive. In 1933, under racial laws of the Nazi Party, Husserl was expelled from the library of the University of Freiburg due to his Jewish family background and months later resigned from the Deutsche Akademie. Following an illness, he died in Freiburg in 1938.
2001-06-29T07:54:09Z
2023-12-27T22:53:08Z
[ "Template:ISBN", "Template:Short description", "Template:Use dmy dates", "Template:Cite American Heritage Dictionary", "Template:Cite Merriam-Webster", "Template:IPAc-en", "Template:Emdash", "Template:Cite dictionary", "Template:Library resources box", "Template:Continental philosophy", "Template:Philosophy of mind", "Template:Authority control", "Template:Further", "Template:Citation", "Template:Edmund Husserl", "Template:Existentialism", "Template:Infobox philosopher", "Template:Citation needed", "Template:Dead link", "Template:Commons category", "Template:Respell", "Template:Notelist", "Template:Cite web", "Template:Cite book", "Template:Wikiquote", "Template:Internet Archive author", "Template:Cite SEP", "Template:IPA-de", "Template:Blockquote", "Template:Platonists", "Template:Rp", "Template:Reflist", "Template:Cbignore", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Edmund_Husserl
9,531
Electrical engineering
Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems which use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use. Electrical engineering is now divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science. Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practising engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE). Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software. Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist, and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term "electricity". He also designed the versorium: a device that detects the presence of statically charged objects. In 1762 Swedish professor Johan Wilcke invented a device later named electrophorus that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery. In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his treatise Electricity and Magnetism. In 1782, Georges-Louis Le Sage developed and presented in Berlin probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. It was an electrostatic telegraph that moved gold leaf through electrical conduction. In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. 
Between 1803 and 1804, he worked on electrical telegraphy, and in 1804, he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative though it was greatly influenced by and based upon two discoveries made in Europe in 1800—Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlisle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers) where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy. Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries, the definitions were immediately recognized in relevant legislation. During these years, the study of electricity was largely considered to be a subfield of physics since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first-degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts.
Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley, Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse backed AC system and a Thomas Edison backed DC power system, with AC being adopted as the overall standard. During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of 2,100 miles (3,400 km). Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901. In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode. In 1920, Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936. In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. 
The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives. In 1948 Claude Shannon published "A Mathematical Theory of Communication" which mathematically describes the passage of information with uncertainty (electrical noise). The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices. The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959. The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking. The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC). The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution. One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right. 
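Shannon's 1948 treatment of information in the presence of uncertainty, mentioned above, is commonly summarized by the entropy of a discrete source, H = -sum(p_i * log2 p_i), measured in bits per symbol. The short sketch below is only an illustration of that formula; the probability distributions are invented examples, not data from this article.

    import math

    def entropy_bits(probabilities):
        # Shannon entropy H = -sum(p * log2(p)); zero-probability symbols contribute nothing.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy_bits([0.5, 0.5]))     # a fair coin: 1.0 bit per toss
    print(entropy_bits([0.9, 0.1]))     # a biased coin: about 0.47 bits per toss
    print(entropy_bits([0.25] * 4))     # four equally likely symbols: 2.0 bits each

Less predictable sources carry more information per symbol, which is the sense in which uncertainty and noise enter the mathematical description of communication.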
Power & Energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. Telecommunications engineering focuses on the transmission of information across a communication channel such as a coax cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer. Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static. Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation. Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries. Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. 
The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example to research is a pneumatic signal conditioner. Prior to the Second World War, the subject was commonly known as radio engineering and basically was restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering. Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today. Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002. Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics. Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals. Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering as many already existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems. 
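Filtering, mentioned above both for the tuned circuit and for signal processing, has a very simple digital counterpart in the moving-average low-pass filter, which smooths a sampled signal by averaging neighbouring samples. The sketch below is illustrative only; the noisy test signal and the window length are invented for the example.

    import math
    import random

    def moving_average(samples, window):
        # Simple FIR low-pass filter: each output sample is the mean of the
        # most recent `window` input samples (fewer at the start of the record).
        out = []
        for i in range(len(samples)):
            chunk = samples[max(0, i - window + 1):i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    random.seed(0)
    # A 5 Hz sine sampled at 1 kHz with additive Gaussian noise, as a test input.
    signal = [math.sin(2 * math.pi * 5 * n / 1000) + random.gauss(0, 0.3)
              for n in range(1000)]
    smoothed = moving_average(signal, window=25)
    print(f"noisy peak about {max(signal):.2f}, smoothed peak about {max(smoothed):.2f}")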
DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing. Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points. Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control. Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering. Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers and novel materials (e.g. metamaterials). Mechatronics is an engineering discipline which deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles. Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems. The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices.
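The instrumentation passage above notes that thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points. Over a limited range the thermoelectric voltage is roughly proportional to that difference, so a first-order read-out is simply delta-T = V / S. The sketch below assumes a type-K-like sensitivity of about 41 microvolts per degree Celsius, an approximate textbook figure used here only for illustration.

    # Rough thermocouple read-out using a linear Seebeck approximation.
    SEEBECK_UV_PER_C = 41.0   # assumed sensitivity, microvolts per degree Celsius

    def delta_t_from_voltage(v_microvolts):
        # Temperature difference between the measuring and reference junctions, in deg C.
        return v_microvolts / SEEBECK_UV_PER_C

    reference_junction_c = 22.0   # reference (cold) junction temperature, assumed
    measured_uv = 1640.0          # example thermocouple voltage, microvolts
    hot_junction_c = reference_junction_c + delta_t_from_voltage(measured_uv)
    print(f"estimated measuring-junction temperature: {hot_junction_c:.1f} deg C")

Real instruments use standardized polynomial tables rather than a single constant, but the principle of inferring a temperature difference from a small voltage is the same.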
Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication. In aerospace engineering and robotics, an example is the most recent electric propulsion and ion propulsion. Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study. At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered. Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree. In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union). The advantages of licensure vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. 
In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law. Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. An MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer. In Australia, Canada, and the United States, electrical engineers make up around 0.25% of the labor force. From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunication systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery. Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others. Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunication systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering.
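Basic circuit theory, referred to above as the interaction of elements such as resistors, capacitors and inductors, is easy to illustrate with its two most elementary resistive results, Ohm's law and the voltage divider. The component values in the sketch below are arbitrary examples.

    # Elementary circuit-theory check: Ohm's law and a two-resistor voltage divider.
    v_supply = 9.0    # supply voltage, volts (example value)
    r1 = 10_000.0     # upper resistor, ohms
    r2 = 4_700.0      # lower resistor, ohms

    current = v_supply / (r1 + r2)        # Ohm's law for the series loop
    v_out = v_supply * r2 / (r1 + r2)     # voltage across r2, the divider output
    print(f"series current: {current * 1000:.3f} mA")
    print(f"divider output: {v_out:.3f} V")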
A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology has its own test sets, often specific to a particular data format, and the same is true of television broadcasting. For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important. The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, on board a naval ship, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers. Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets.
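Harmonic distortion, mentioned above as one of the quantities an audio test set measures, can be estimated from a sampled tone with a discrete Fourier transform: total harmonic distortion (THD) is the ratio of the combined harmonic amplitudes to the fundamental amplitude. The sketch below synthesizes its own test tone with 5% second-harmonic content; all values are invented for illustration.

    import numpy as np

    fs = 48_000           # sample rate, Hz
    f0 = 1_000            # fundamental frequency of the test tone, Hz
    n = fs                # one second of samples, so each DFT bin is exactly 1 Hz wide
    t = np.arange(n) / fs

    # Synthetic "device output": a 1 kHz tone plus a 5% second harmonic.
    x = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)

    spectrum = np.abs(np.fft.rfft(x))
    fundamental = spectrum[f0]                          # 1 Hz bins, so index equals Hz
    harmonics = spectrum[[2 * f0, 3 * f0, 4 * f0, 5 * f0]]
    thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
    print(f"estimated THD: {100 * thd:.2f}%")           # about 5.00% for this signal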
[ { "paragraph_id": 0, "text": "Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems which use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use.", "title": "" }, { "paragraph_id": 1, "text": "Electrical engineering is now divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science.", "title": "" }, { "paragraph_id": 2, "text": "Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practising engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE).", "title": "" }, { "paragraph_id": 3, "text": "Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software.", "title": "" }, { "paragraph_id": 4, "text": "Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist, and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term \"electricity\". He also designed the versorium: a device that detects the presence of statically charged objects. In 1762 Swedish professor Johan Wilcke invented a device later named electrophorus that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery.", "title": "History" }, { "paragraph_id": 5, "text": "In the 19th century, research into the subject started to intensify. 
Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his treatise Electricity and Magnetism.", "title": "History" }, { "paragraph_id": 6, "text": "In 1782, Georges-Louis Le Sage developed and presented in Berlin probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. It was an electrostatic telegraph that moved gold leaf through electrical conduction.", "title": "History" }, { "paragraph_id": 7, "text": "In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804, he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative though it was greatly influenced by and based upon two discoveries made in Europe in 1800—Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlyle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers) where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables, and, from about 1890, wireless telegraphy.", "title": "History" }, { "paragraph_id": 8, "text": "Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries, the definitions were immediately recognized in relevant legislation.", "title": "History" }, { "paragraph_id": 9, "text": "During these years, the study of electricity was largely considered to be a subfield of physics since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first-degree course in electrical engineering in 1883. 
The first electrical engineering degree program in the United States was started at Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University to produce the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts.", "title": "History" }, { "paragraph_id": 10, "text": "In about 1885 Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at University of Missouri established the electrical engineering department in 1886. Afterwards, universities and institutes of technology gradually started to offer electrical engineering programs to their students all over the world.", "title": "History" }, { "paragraph_id": 11, "text": "During these decades the use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts—direct current (DC)—to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley, Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse backed AC system and a Thomas Edison backed DC power system, with AC being adopted as the overall standard.", "title": "History" }, { "paragraph_id": 12, "text": "During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called \"radio waves\"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these \"Hertzian waves\" into a purpose built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. 
John's, Newfoundland, a distance of 2,100 miles (3,400 km).", "title": "History" }, { "paragraph_id": 13, "text": "Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.", "title": "History" }, { "paragraph_id": 14, "text": "In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.", "title": "History" }, { "paragraph_id": 15, "text": "In 1920, Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.", "title": "History" }, { "paragraph_id": 16, "text": "In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives.", "title": "History" }, { "paragraph_id": 17, "text": "In 1948 Claude Shannon published \"A Mathematical Theory of Communication\" which mathematically describes the passage of information with uncertainty (electrical noise).", "title": "History" }, { "paragraph_id": 18, "text": "The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices.", "title": "History" }, { "paragraph_id": 19, "text": "The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959.", "title": "History" }, { "paragraph_id": 20, "text": "The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world.", "title": "History" }, { "paragraph_id": 21, "text": "The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. 
MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking.", "title": "History" }, { "paragraph_id": 22, "text": "The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).", "title": "History" }, { "paragraph_id": 23, "text": "The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution.", "title": "History" }, { "paragraph_id": 24, "text": "One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right.", "title": "Subfields" }, { "paragraph_id": 25, "text": "Power & Energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems.", "title": "Subfields" }, { "paragraph_id": 26, "text": "Telecommunications engineering focuses on the transmission of information across a communication channel such as a coax cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. 
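As a rough illustration of the encoding just described (standard textbook notation, not drawn from this article; the message x(t), carrier frequency f_c, modulation index m and frequency-sensitivity constant k_f are generic placeholder symbols), a modulated carrier can be sketched as

\[ s(t) = A(t)\,\cos\bigl(2\pi f_c t + \phi(t)\bigr), \]

where amplitude modulation varies the envelope, A(t) = A_c\bigl[1 + m\,x(t)\bigr], with constant phase, while frequency modulation keeps A(t) = A_c fixed and varies the phase, \phi(t) = 2\pi k_f \int_0^t x(\tau)\,d\tau. Shifting the message onto the carrier in this way is exactly the frequency translation referred to above.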
Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer.", "title": "Subfields" }, { "paragraph_id": 27, "text": "Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static.", "title": "Subfields" }, { "paragraph_id": 28, "text": "Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation.", "title": "Subfields" }, { "paragraph_id": 29, "text": "Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.", "title": "Subfields" }, { "paragraph_id": 30, "text": "Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.", "title": "Subfields" }, { "paragraph_id": 31, "text": "Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example to research is a pneumatic signal conditioner.", "title": "Subfields" }, { "paragraph_id": 32, "text": "Prior to the Second World War, the subject was commonly known as radio engineering and basically was restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.", "title": "Subfields" }, { "paragraph_id": 33, "text": "Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. 
These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.", "title": "Subfields" }, { "paragraph_id": 34, "text": "Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level.", "title": "Subfields" }, { "paragraph_id": 35, "text": "Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002.", "title": "Subfields" }, { "paragraph_id": 36, "text": "Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.", "title": "Subfields" }, { "paragraph_id": 37, "text": "Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.", "title": "Subfields" }, { "paragraph_id": 38, "text": "Signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering as many already existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems.", "title": "Subfields" }, { "paragraph_id": 39, "text": "DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. 
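A minimal sketch of the kind of filtering such DSP devices perform is given below: a simple moving-average low-pass filter in plain Python. The example signal, sampling rate and window length are invented purely for illustration and are not taken from this article.

import math

def moving_average(samples, window=5):
    # FIR low-pass filter: each output sample is the mean of the last `window` inputs.
    out = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        out.append(sum(samples[start:i + 1]) / (i + 1 - start))
    return out

# Illustrative input: a 50 Hz tone plus a 300 Hz component, sampled at 1 kHz.
fs = 1000  # sampling rate in Hz (made up for this example)
signal = [math.sin(2 * math.pi * 50 * n / fs) + 0.3 * math.sin(2 * math.pi * 300 * n / fs)
          for n in range(200)]
smoothed = moving_average(signal, window=5)  # attenuates the 300 Hz component far more than the 50 Hz tone
print(round(signal[100], 3), round(smoothed[100], 3))

Production DSP firmware would normally use carefully designed fixed-point FIR or IIR filters rather than this naive averaging, but the principle is the same: each output sample is a weighted combination of recent input samples.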
In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing.", "title": "Subfields" }, { "paragraph_id": 40, "text": "Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.", "title": "Subfields" }, { "paragraph_id": 41, "text": "Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.", "title": "Subfields" }, { "paragraph_id": 42, "text": "Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering.", "title": "Subfields" }, { "paragraph_id": 43, "text": "Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers and novel materials (e.g. metamaterials).", "title": "Subfields" }, { "paragraph_id": 44, "text": "Mechatronics is an engineering discipline which deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles. Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems.", "title": "Related disciplines" }, { "paragraph_id": 45, "text": "The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices.
Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.", "title": "Related disciplines" }, { "paragraph_id": 46, "text": "In aerospace engineering and robotics, an example is the most recent electric propulsion and ion propulsion.", "title": "Related disciplines" }, { "paragraph_id": 47, "text": "Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At some schools, the students can then choose to emphasize one or more subdisciplines towards the end of their courses of study.", "title": "Education" }, { "paragraph_id": 48, "text": "At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered.", "title": "Education" }, { "paragraph_id": 49, "text": "Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree.", "title": "Education" }, { "paragraph_id": 50, "text": "In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. 
Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).", "title": "Professional practice" }, { "paragraph_id": 51, "text": "The advantages of licensure vary depending upon location. For example, in the United States and Canada \"only a licensed engineer may seal engineering work for public and private clients\". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law.", "title": "Professional practice" }, { "paragraph_id": 52, "text": "Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. An MIET(Member of the Institution of Engineering and Technology) is recognised in Europe as an Electrical and computer (technology) engineer.", "title": "Professional practice" }, { "paragraph_id": 53, "text": "In Australia, Canada, and the United States electrical engineers make up around 0.25% of the labor force.", "title": "Professional practice" }, { "paragraph_id": 54, "text": "From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunication systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery.", "title": "Tools and work" }, { "paragraph_id": 55, "text": "Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. 
Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.", "title": "Tools and work" }, { "paragraph_id": 56, "text": "Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunication systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering.", "title": "Tools and work" }, { "paragraph_id": 57, "text": "A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology has its own test sets, often specific to a particular data format, and the same is true of television broadcasting.", "title": "Tools and work" }, { "paragraph_id": 58, "text": "For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers, and for this reason project management skills are important. Most engineering projects involve some form of documentation, and strong written communication skills are therefore very important.", "title": "Tools and work" }, { "paragraph_id": 59, "text": "The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, on board a naval ship, in the offices of a consulting firm, or on site at a mine.
During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers.", "title": "Tools and work" }, { "paragraph_id": 60, "text": "Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets.", "title": "Tools and work" } ]
Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems which use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use. Electrical engineering is now divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science. Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practising engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology. Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software.
2001-09-29T12:21:31Z
2023-12-15T05:26:31Z
[ "Template:Use dmy dates", "Template:Sister project links", "Template:Harvnb", "Template:Div col end", "Template:Cite web", "Template:Cite encyclopedia", "Template:Webarchive", "Template:Ndash", "Template:Portal", "Template:Nbsp", "Template:Cbignore", "Template:Div col", "Template:Authority control", "Template:Short description", "Template:See also", "Template:Sfn", "Template:Reflist", "Template:ISBN", "Template:Efn", "Template:Main", "Template:Cite book", "Template:Cite journal", "Template:Library resources box", "Template:Glossaries of science and engineering", "Template:Infobox occupation", "Template:Notelist", "Template:Convert", "Template:Engineering fields" ]
https://en.wikipedia.org/wiki/Electrical_engineering
9,532
Electromagnetism
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, two distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles, causing an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs exclusively between charged particles in relative motion. These two effects combine to create electromagnetic fields in the vicinity of charged particles, which can accelerate other charged particles via the Lorentz force. At high energy, the weak force and electromagnetic force are unified as a single electroweak force. The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays a crucial role in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators. Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it wasn't until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Besides providing a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, Maxwell's equations also predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Indeed, gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies. In the modern era, scientists have continued to refine the theorem of electromagnetism to take into account the effects of modern physics, including quantum mechanics and relativity. 
Indeed, the theoretical implications of electromagnetism, particularly the establishment of the speed of light based on properties of the "medium" of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Meanwhile, the field of quantum electrodynamics (QED) has modified Maxwell's equations to be consistent with the quantized nature of matter. In QED, the electromagnetic field is expressed in terms of discrete particles known as photons, which are also the physical quanta of light. Today, there exist many problems in electromagnetism that remain unsolved, such as the existence of magnetic monopoles, Abraham–Minkowski controversy, and the mechanism by which some organisms can sense electric and magnetic fields. Investigation into electromagnetic phenomena began as early as 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one other, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures). In Europe, electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments: In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. 
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor if current flowed across the needle or not. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community. An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars." The electromagnetic force is the second strongest of the four known fundamental forces. It operates with infinite range. The other fundamental forces are: All other forces (e.g., friction, contact forces) are derived from these four fundamental forces and they are known as non-fundamental forces. Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting between the electrically charged atomic nuclei and electrons of the atoms. Electromagnetic forces also explain how these particles carry momentum by their movement. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which result from the intermolecular forces that act between the individual molecules in our bodies and those in the objects. The electromagnetic force is also involved in all forms of chemical phenomena. A necessary part of understanding the intra-atomic and intermolecular forces is the effective force generated by the momentum of the electrons' movement, such that as electrons move between interacting atoms they carry momentum with them. 
As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behaviour of matter at the molecular scale including its density is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves. In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752 were conducted on 10 May 1752 by Thomas-François Dalibard of France using a 40-foot-tall (12 m) iron rod instead of a kite and he successfully extracted electrical sparks from a cloud. One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". 
(For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.) The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Here is a list of common units related to electromagnetism: In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units. The study of electromagnetism informs electric circuits and semiconductor devices' construction.
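For reference, the relations named above can be written out in their standard vacuum (microscopic) SI textbook form; this is common notation rather than anything quoted from this article:

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}, \]

together with the Lorentz force law and the vacuum speed of light,

\[ \mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B}), \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8}\ \text{m/s}. \]

Because the sources \rho and \mathbf{J} enter these equations only linearly, scaling the charges and currents scales the fields by the same factor, which is the superposition (linearity) property noted above; and the constant c depends only on the permittivity \varepsilon_0 and permeability \mu_0 of free space, as stated in the discussion of classical electrodynamics.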
[ { "paragraph_id": 0, "text": "In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, two distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles, causing an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs exclusively between charged particles in relative motion. These two effects combine to create electromagnetic fields in the vicinity of charged particles, which can accelerate other charged particles via the Lorentz force. At high energy, the weak force and electromagnetic force are unified as a single electroweak force.", "title": "" }, { "paragraph_id": 1, "text": "The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays a crucial role in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.", "title": "" }, { "paragraph_id": 2, "text": "Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it wasn't until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Besides providing a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, Maxwell's equations also predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Indeed, gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.", "title": "" }, { "paragraph_id": 3, "text": "In the modern era, scientists have continued to refine the theorem of electromagnetism to take into account the effects of modern physics, including quantum mechanics and relativity. 
Indeed, the theoretical implications of electromagnetism, particularly the establishment of the speed of light based on properties of the \"medium\" of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Meanwhile, the field of quantum electrodynamics (QED) has modified Maxwell's equations to be consistent with the quantized nature of matter. In QED, the electromagnetic field is expressed in terms of discrete particles known as photons, which are also the physical quanta of light. Today, there exist many problems in electromagnetism that remain unsolved, such as the existence of magnetic monopoles, Abraham–Minkowski controversy, and the mechanism by which some organisms can sense electric and magnetic fields.", "title": "" }, { "paragraph_id": 4, "text": "Investigation into electromagnetic phenomena began as early as 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one other, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).", "title": "History" }, { "paragraph_id": 5, "text": "In Europe, electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:", "title": "History" }, { "paragraph_id": 6, "text": "In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism.", "title": "History" }, { "paragraph_id": 7, "text": "His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. 
Ørsted's discovery also represented a major step toward a unified concept of energy.", "title": "History" }, { "paragraph_id": 8, "text": "This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.", "title": "History" }, { "paragraph_id": 9, "text": "Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor if current flowed across the needle or not. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.", "title": "History" }, { "paragraph_id": 10, "text": "An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated:", "title": "History" }, { "paragraph_id": 11, "text": "A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ...", "title": "History" }, { "paragraph_id": 12, "text": "E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be \"credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars.\"", "title": "History" }, { "paragraph_id": 13, "text": "The electromagnetic force is the second strongest of the four known fundamental forces. It operates with infinite range. The other fundamental forces are:", "title": "Fundamental forces" }, { "paragraph_id": 14, "text": "All other forces (e.g., friction, contact forces) are derived from these four fundamental forces and they are known as non-fundamental forces.", "title": "Fundamental forces" }, { "paragraph_id": 15, "text": "Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting between the electrically charged atomic nuclei and electrons of the atoms. Electromagnetic forces also explain how these particles carry momentum by their movement. 
This includes the forces we experience in \"pushing\" or \"pulling\" ordinary material objects, which result from the intermolecular forces that act between the individual molecules in our bodies and those in the objects. The electromagnetic force is also involved in all forms of chemical phenomena.", "title": "Fundamental forces" }, { "paragraph_id": 16, "text": "A necessary part of understanding the intra-atomic and intermolecular forces is the effective force generated by the momentum of the electrons' movement, such that as electrons move between interacting atoms they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behaviour of matter at the molecular scale including its density is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.", "title": "Fundamental forces" }, { "paragraph_id": 17, "text": "In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752 were conducted on 10 May 1752 by Thomas-François Dalibard of France using a 40-foot-tall (12 m) iron rod instead of a kite and he successfully extracted electrical sparks from a cloud.", "title": "Classical electrodynamics" }, { "paragraph_id": 18, "text": "One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.", "title": "Classical electrodynamics" }, { "paragraph_id": 19, "text": "A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.", "title": "Classical electrodynamics" }, { "paragraph_id": 20, "text": "One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. 
One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)", "title": "Classical electrodynamics" }, { "paragraph_id": 21, "text": "In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term \"electromagnetism\". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)", "title": "Classical electrodynamics" }, { "paragraph_id": 22, "text": "The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations.", "title": "Extension to nonlinear phenomena" }, { "paragraph_id": 23, "text": "Here is a list of common units related to electromagnetism:", "title": "Quantities and units" }, { "paragraph_id": 24, "text": "In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.", "title": "Quantities and units" }, { "paragraph_id": 25, "text": "Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit \"sub-systems\", including Gaussian, \"ESU\", \"EMU\", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase \"CGS units\" is often used to refer specifically to CGS-Gaussian units.", "title": "Quantities and units" }, { "paragraph_id": 26, "text": "The study of electromagnetism informs electric circuits and semiconductor devices' construction.", "title": "Applications" } ]
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, two distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles, causing an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs exclusively between charged particles in relative motion. These two effects combine to create electromagnetic fields in the vicinity of charged particles, which can accelerate other charged particles via the Lorentz force. At high energy, the weak force and electromagnetic force are unified as a single electroweak force. The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays a crucial role in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators. Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Besides providing a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, Maxwell's equations also predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Indeed, gamma rays, X-rays, ultraviolet, visible and infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation, differing only in their range of frequencies. In the modern era, scientists have continued to refine the theory of electromagnetism to take into account the effects of modern physics, including quantum mechanics and relativity. 
Indeed, the theoretical implications of electromagnetism, particularly the establishment of the speed of light based on properties of the "medium" of propagation, helped inspire Einstein's theory of special relativity in 1905. Meanwhile, the field of quantum electrodynamics (QED) has modified Maxwell's equations to be consistent with the quantized nature of matter. In QED, the electromagnetic field is expressed in terms of discrete particles known as photons, which are also the physical quanta of light. Today, many problems in electromagnetism remain unsolved, such as the existence of magnetic monopoles, the Abraham–Minkowski controversy, and the mechanism by which some organisms can sense electric and magnetic fields.
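For reference, the four partial differential equations mentioned above are usually written, in SI "microscopic" form, as follows (ρ denotes charge density and J current density; both symbols are standard textbook notation rather than quantities defined in this summary):

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \]

Setting ρ = 0 and J = 0 and combining the two curl equations gives wave equations for E and B with propagation speed 1/√(μ₀ε₀); these are the self-sustaining electromagnetic waves that Maxwell identified with visible light.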
2001-09-26T18:43:38Z
2023-12-11T07:16:32Z
[ "Template:Short description", "Template:Redirect", "Template:Nbsp", "Template:Refend", "Template:Magnetic states", "Template:SI electromagnetism units", "Template:Citation", "Template:Fundamental interactions", "Template:Electromagnetism", "Template:Vanchor", "Template:Div col", "Template:Reflist", "Template:Cite book", "Template:Redirect-synonym", "Template:Wikiquote", "Template:Shy", "Template:Convert", "Template:Cite web", "Template:Library resources box", "Template:Refbegin", "Template:Pp-semi-indef", "Template:For introduction", "Template:See also", "Template:Div col end", "Template:Main", "Template:Cite journal", "Template:Cite thesis", "Template:Branches of physics", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Electromagnetism
9,534
Euphemism
A euphemism (/ˈjuːfəmɪzəm/ YOO-fə-miz-əm) is an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant. Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer to topics some consider taboo such as disability, sex, excretion, or death in a polite way. Euphemism comes from the Greek word euphemia (εὐφημία) which refers to the use of 'words of good omen'; it is a compound of eû (εὖ), meaning 'good, well', and phḗmē (φήμη), meaning 'prophetic speech; rumour, talk'. Eupheme is a reference to the female Greek spirit of words of praise and positivity, etc. The term euphemism itself was used as a euphemism by the ancient Greeks; with the meaning "to keep a holy silence" (speaking well by not speaking at all). Reasons for using euphemisms vary by context and intent. Commonly, euphemisms are used to avoid directly addressing subjects that might be deemed negative or embarrassing, e.g., death, sex, excretory bodily functions. They may be created for innocent, well-intentioned purposes or nefariously and cynically, intentionally to deceive and confuse. Euphemisms are also used to mitigate, soften or downplay the gravity of large-scale injustices, war crimes, or other events that warrant a pattern of avoidance in official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations at Auschwitz, relative to their sheer number, is "directives for the extermination process obscured in bureaucratic euphemisms". Another example of this is during the 2022 Russian invasion of Ukraine, where Russian President Vladimir Putin, in his speech starting the invasion, called the invasion a "special military operation". Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguist Ghil'ad Zuckermann, Israeli Prime Minister Benjamin Netanyahu used the neutral Hebrew lexical item פעימות peimót (literally 'beatings (of the heart)'), rather than נסיגה nesigá ('withdrawal'), to refer to the stages in the Israeli withdrawal from the West Bank (see Wye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move. Peimót was thus used as a euphemism for 'withdrawal'. Euphemism may be used as a rhetorical strategy, in which case its goal is to change the valence of a description. The act of labeling a term as a euphemism can in itself be controversial, as in the following examples: The use of euphemism online is known as "algospeak" and is used to evade automated online moderation techniques used on Meta and TikTok's platforms. Algospeak has been used in debate about the Israeli–Palestinian conflict. Phonetic euphemism is used to replace profanities and blasphemies, diminishing their intensity. To alter the pronunciation or spelling of a taboo word (such as a swear word) to form a euphemism is known as taboo deformation, or a minced oath. Such modifications include: Euphemisms formed from understatements include asleep for dead and drinking for consuming alcohol. "Tired and emotional" is a notorious British euphemism for "drunk", one of many recurring jokes popularized by the satirical magazine Private Eye; it has been used by MPs to avoid unparliamentary language. 
Pleasant, positive, worthy, neutral, or nondescript terms are often substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical movements, marketing, public relations, or advertising initiatives, including: Some examples of Cockney rhyming slang may serve the same purpose: to call a person a berk sounds less offensive than to call a person a cunt, though berk is short for Berkeley Hunt, which rhymes with cunt. The use of a term with a softer connotation, though it shares the same meaning. For instance, screwed up is a euphemism for 'fucked up'; hook-up and laid are euphemisms for 'sexual intercourse'. Expressions or words from a foreign language may be imported for use as euphemism. For example, the French word enceinte was sometimes used instead of the English word pregnant; abattoir for slaughterhouse, although in French the word retains its explicit violent meaning 'a place for beating down', conveniently lost on non-French speakers. Entrepreneur for businessman, adds glamour; douche (French for 'shower') for vaginal irrigation device; bidet ('little pony') for vessel for anal washing. Ironically, although in English physical "handicaps" are almost always described with euphemism, in French the English word handicap is used as a euphemism for their problematic words infirmité or invalidité. Periphrasis, or circumlocution, is one of the most common: to "speak around" a given word, implying it without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas. Bureaucracies frequently spawn euphemisms intentionally, as doublespeak expressions. For example, in the past, the US military used the term "sunshine units" for contamination by radioactive isotopes. The United States Central Intelligence Agency refers to systematic torture as "enhanced interrogation techniques". An effective death sentence in the Soviet Union during the Great Purge often used the clause "imprisonment without right to correspondence": the person sentenced would be shot soon after conviction. As early as 1939, Nazi official Reinhard Heydrich used the term Sonderbehandlung ("special treatment") to mean summary execution of persons viewed as "disciplinary problems" by the Nazis even before commencing the systematic extermination of the Jews. Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be "guided" (to their deaths) through the slave-labor and extermination camps after having been "evacuated" to their doom. Such was part of the formulation of Endlösung der Judenfrage (the "Final Solution to the Jewish Question"), which became known to the outside world during the Nuremberg Trials. Frequently, over time, euphemisms themselves become taboo words, through the linguistic process of semantic change known as pejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the "euphemism cycle" in 1974, also frequently referred to as the "euphemism treadmill". For instance, the place of human defecation is a needy candidate for a euphemism in all eras. Toilet is an 18th-century euphemism, replacing the older euphemism house-of-office, which in turn replaced the even older euphemisms privy-house and bog-house. 
In the 20th century, where the old euphemisms lavatory (a place where one washes) and toilet (a place where one dresses) had grown from widespread usage (e.g., in the United States) to being synonymous with the crude act they sought to deflect, they were sometimes replaced with bathroom (a place where one bathes), washroom (a place where one washes), or restroom (a place where one rests) or even by the extreme form powder room (a place where one applies facial cosmetics). The form water closet, often shortened to W.C., is a less deflective form. The word shit appears to have originally been a euphemism for defecation in Pre-Germanic, as the Proto-Indo-European root *sḱeyd-, from which it was derived, meant 'to cut off'. Another example in American English is the replacement of "colored people" with "Negro" (euphemism by foreign language), which itself came to be replaced by either "African American" or "Black". Also in the United States the term "ethnic minorities" in the 2010s has been replaced by "people of color". Venereal disease, which associated shameful bacterial infection with a seemingly worthy ailment emanating from Venus, the goddess of love, soon lost its deflective force in the post-classical education era, as "VD", which was replaced by the three-letter initialism "STD" (sexually transmitted disease); later, "STD" was replaced by "STI" (sexually transmitted infection). Intellectually-disabled people were originally defined with words such as "morons" or "imbeciles", which then became commonly used insults. The medical diagnosis was changed to "mentally retarded", which morphed into a pejorative against those with intellectual disabilities. To avoid the negative connotations of their diagnoses, students who need accommodations because of such conditions are often labeled as "special needs" instead, although the words "special" or "sped" (short for "special education") have long been schoolyard insults. As of August 2013, the Social Security Administration replaced the term "mental retardation" with "intellectual disability". Since 2012, that change in terminology has been adopted by the National Institutes of Health and the medical industry at large. There are numerous disability-related euphemisms that have negative connotations.
[ { "paragraph_id": 0, "text": "A euphemism (/ˈjuːfəmɪzəm/ YOO-fə-miz-əm) is an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant. Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer to topics some consider taboo such as disability, sex, excretion, or death in a polite way.", "title": "" }, { "paragraph_id": 1, "text": "Euphemism comes from the Greek word euphemia (εὐφημία) which refers to the use of 'words of good omen'; it is a compound of eû (εὖ), meaning 'good, well', and phḗmē (φήμη), meaning 'prophetic speech; rumour, talk'. Eupheme is a reference to the female Greek spirit of words of praise and positivity, etc. The term euphemism itself was used as a euphemism by the ancient Greeks; with the meaning \"to keep a holy silence\" (speaking well by not speaking at all).", "title": "Etymology" }, { "paragraph_id": 2, "text": "Reasons for using euphemisms vary by context and intent. Commonly, euphemisms are used to avoid directly addressing subjects that might be deemed negative or embarrassing, e.g., death, sex, excretory bodily functions. They may be created for innocent, well-intentioned purposes or nefariously and cynically, intentionally to deceive and confuse.", "title": "Purpose" }, { "paragraph_id": 3, "text": "Euphemisms are also used to mitigate, soften or downplay the gravity of large-scale injustices, war crimes, or other events that warrant a pattern of avoidance in official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations at Auschwitz, relative to their sheer number, is \"directives for the extermination process obscured in bureaucratic euphemisms\". Another example of this is during the 2022 Russian invasion of Ukraine, where Russian President Vladimir Putin, in his speech starting the invasion, called the invasion a \"special military operation\".", "title": "Purpose" }, { "paragraph_id": 4, "text": "Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguist Ghil'ad Zuckermann, Israeli Prime Minister Benjamin Netanyahu used the neutral Hebrew lexical item פעימות peimót (literally 'beatings (of the heart)'), rather than נסיגה nesigá ('withdrawal'), to refer to the stages in the Israeli withdrawal from the West Bank (see Wye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move. Peimót was thus used as a euphemism for 'withdrawal'.", "title": "Purpose" }, { "paragraph_id": 5, "text": "Euphemism may be used as a rhetorical strategy, in which case its goal is to change the valence of a description.", "title": "Purpose" }, { "paragraph_id": 6, "text": "The act of labeling a term as a euphemism can in itself be controversial, as in the following examples:", "title": "Controversial use" }, { "paragraph_id": 7, "text": "The use of euphemism online is known as \"algospeak\" and is used to evade automated online moderation techniques used on Meta and TikTok's platforms. Algospeak has been used in debate about the Israeli–Palestinian conflict.", "title": "Controversial use" }, { "paragraph_id": 8, "text": "Phonetic euphemism is used to replace profanities and blasphemies, diminishing their intensity. 
To alter the pronunciation or spelling of a taboo word (such as a swear word) to form a euphemism is known as taboo deformation, or a minced oath. Such modifications include:", "title": "Formation methods" }, { "paragraph_id": 9, "text": "Euphemisms formed from understatements include asleep for dead and drinking for consuming alcohol. \"Tired and emotional\" is a notorious British euphemism for \"drunk\", one of many recurring jokes popularized by the satirical magazine Private Eye; it has been used by MPs to avoid unparliamentary language.", "title": "Formation methods" }, { "paragraph_id": 10, "text": "Pleasant, positive, worthy, neutral, or nondescript terms are often substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical movements, marketing, public relations, or advertising initiatives, including:", "title": "Formation methods" }, { "paragraph_id": 11, "text": "Some examples of Cockney rhyming slang may serve the same purpose: to call a person a berk sounds less offensive than to call a person a cunt, though berk is short for Berkeley Hunt, which rhymes with cunt.", "title": "Formation methods" }, { "paragraph_id": 12, "text": "The use of a term with a softer connotation, though it shares the same meaning. For instance, screwed up is a euphemism for 'fucked up'; hook-up and laid are euphemisms for 'sexual intercourse'.", "title": "Formation methods" }, { "paragraph_id": 13, "text": "Expressions or words from a foreign language may be imported for use as euphemism. For example, the French word enceinte was sometimes used instead of the English word pregnant; abattoir for slaughterhouse, although in French the word retains its explicit violent meaning 'a place for beating down', conveniently lost on non-French speakers. Entrepreneur for businessman, adds glamour; douche (French for 'shower') for vaginal irrigation device; bidet ('little pony') for vessel for anal washing. Ironically, although in English physical \"handicaps\" are almost always described with euphemism, in French the English word handicap is used as a euphemism for their problematic words infirmité or invalidité.", "title": "Formation methods" }, { "paragraph_id": 14, "text": "Periphrasis, or circumlocution, is one of the most common: to \"speak around\" a given word, implying it without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas.", "title": "Formation methods" }, { "paragraph_id": 15, "text": "Bureaucracies frequently spawn euphemisms intentionally, as doublespeak expressions. For example, in the past, the US military used the term \"sunshine units\" for contamination by radioactive isotopes. The United States Central Intelligence Agency refers to systematic torture as \"enhanced interrogation techniques\". An effective death sentence in the Soviet Union during the Great Purge often used the clause \"imprisonment without right to correspondence\": the person sentenced would be shot soon after conviction. As early as 1939, Nazi official Reinhard Heydrich used the term Sonderbehandlung (\"special treatment\") to mean summary execution of persons viewed as \"disciplinary problems\" by the Nazis even before commencing the systematic extermination of the Jews. 
Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be \"guided\" (to their deaths) through the slave-labor and extermination camps after having been \"evacuated\" to their doom. Such was part of the formulation of Endlösung der Judenfrage (the \"Final Solution to the Jewish Question\"), which became known to the outside world during the Nuremberg Trials.", "title": "Doublespeak" }, { "paragraph_id": 16, "text": "Frequently, over time, euphemisms themselves become taboo words, through the linguistic process of semantic change known as pejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the \"euphemism cycle\" in 1974, also frequently referred to as the \"euphemism treadmill\". For instance, the place of human defecation is a needy candidate for a euphemism in all eras. Toilet is an 18th-century euphemism, replacing the older euphemism house-of-office, which in turn replaced the even older euphemisms privy-house and bog-house. In the 20th century, where the old euphemisms lavatory (a place where one washes) and toilet (a place where one dresses) had grown from widespread usage (e.g., in the United States) to being synonymous with the crude act they sought to deflect, they were sometimes replaced with bathroom (a place where one bathes), washroom (a place where one washes), or restroom (a place where one rests) or even by the extreme form powder room (a place where one applies facial cosmetics). The form water closet, often shortened to W.C., is a less deflective form. The word shit appears to have originally been a euphemism for defecation in Pre-Germanic, as the Proto-Indo-European root *sḱeyd-, from which it was derived, meant 'to cut off'.", "title": "Lifespan " }, { "paragraph_id": 17, "text": "Another example in American English is the replacement of \"colored people\" with \"Negro\" (euphemism by foreign language), which itself came to be replaced by either \"African American\" or \"Black\". Also in the United States the term \"ethnic minorities\" in the 2010s has been replaced by \"people of color\".", "title": "Lifespan " }, { "paragraph_id": 18, "text": "Venereal disease, which associated shameful bacterial infection with a seemingly worthy ailment emanating from Venus, the goddess of love, soon lost its deflective force in the post-classical education era, as \"VD\", which was replaced by the three-letter initialism \"STD\" (sexually transmitted disease); later, \"STD\" was replaced by \"STI\" (sexually transmitted infection).", "title": "Lifespan " }, { "paragraph_id": 19, "text": "Intellectually-disabled people were originally defined with words such as \"morons\" or \"imbeciles\", which then became commonly used insults. The medical diagnosis was changed to \"mentally retarded\", which morphed into a pejorative against those with intellectual disabilities. To avoid the negative connotations of their diagnoses, students who need accommodations because of such conditions are often labeled as \"special needs\" instead, although the words \"special\" or \"sped\" (short for \"special education\") have long been schoolyard insults. As of August 2013, the Social Security Administration replaced the term \"mental retardation\" with \"intellectual disability\". Since 2012, that change in terminology has been adopted by the National Institutes of Health and the medical industry at large. 
There are numerous disability-related euphemisms that have negative connotations.", "title": "Lifespan " } ]
A euphemism is an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant. Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer to topics some consider taboo such as disability, sex, excretion, or death in a polite way.
2001-09-19T19:21:29Z
2023-12-27T21:58:17Z
[ "Template:Cite book", "Template:Clarify", "Template:Div col end", "Template:PIE", "Template:Crossreference", "Template:Unreferenced section", "Template:Respell", "Template:See also", "Template:Main article", "Template:Reflist", "Template:Cite dictionary", "Template:Cite web", "Template:Rp", "Template:Anchor", "Template:Better source needed", "Template:Cite magazine", "Template:Cite news", "Template:Cite journal", "Template:Wiktionary-inline", "Template:Media manipulation", "Template:Lang", "Template:More citations needed", "Template:Censorship", "Template:Use dmy dates", "Template:Div col", "Template:Figures of speech", "Template:Authority control", "Template:Citation needed", "Template:Cite report", "Template:IPAc-en", "Template:Cite EB1911", "Template:Short description", "Template:Original research" ]
https://en.wikipedia.org/wiki/Euphemism
9,536
Edmund Spenser
Edmund Spenser (/ˈspɛnsər/; 1552/1553 – 13 January O.S. 1599) was an English poet best known for The Faerie Queene, an epic poem and fantastical allegory celebrating the Tudor dynasty and Elizabeth I. He is recognized as one of the premier craftsmen of nascent Modern English verse, and he is considered one of the great poets in the English language. Edmund Spenser was born in East Smithfield, London, around the year 1552; however, there is still some ambiguity as to the exact date of his birth. His parenthood is obscure, but he was probably the son of John Spenser, a journeyman clothmaker. As a young boy, he was educated in London at the Merchant Taylors' School and matriculated as a sizar at Pembroke College, Cambridge. While at Cambridge he became a friend of Gabriel Harvey and later consulted him, despite their differing views on poetry. In 1578, he became for a short time secretary to John Young, Bishop of Rochester. In 1579, he published The Shepheardes Calender and around the same time married his first wife, Machabyas Childe. They had two children, Sylvanus (d. 1638) and Katherine. In July 1580, Spenser went to Ireland in service of the newly appointed Lord Deputy, Arthur Grey, 14th Baron Grey de Wilton. Spenser served under Lord Grey with Walter Raleigh at the Siege of Smerwick massacre. When Lord Grey was recalled to England, Spenser stayed on in Ireland, having acquired other official posts and lands in the Munster Plantation. Raleigh acquired other nearby Munster estates confiscated in the Second Desmond Rebellion. Sometime between 1587 and 1589, Spenser acquired his main estate at Kilcolman, near Doneraile in North Cork. He later bought a second holding to the south, at Rennie, on a rock overlooking the river Blackwater in North Cork. Its ruins are still visible today. A short distance away grew a tree, locally known as "Spenser's Oak" until it was destroyed in a lightning strike in the 1960s. Local legend claims that he penned some of The Faerie Queene under this tree. In 1590, Spenser brought out the first three books of his most famous work, The Faerie Queene, having travelled to London to publish and promote the work, with the likely assistance of Raleigh. He was successful enough to obtain a life pension of £50 a year from the Queen. He probably hoped to secure a place at court through his poetry, but his next significant publication boldly antagonised the queen's principal secretary, Lord Burghley (William Cecil), through its inclusion of the satirical Mother Hubberd's Tale. He returned to Ireland. He was at the centre of a literary circle whose members included his lifelong friend Lodowick Bryskett and Dr. John Longe, Archbishop of Armagh. In 1591, Spenser published a translation in verse of Joachim Du Bellay's sonnets, Les Antiquités de Rome, which had been published in 1558. Spenser's version, Ruines of Rome: by Bellay, may also have been influenced by Latin poems on the same subject, written by Jean or Janis Vitalis and published in 1576. By 1594, Spenser's first wife had died, and in that year he married a much younger Elizabeth Boyle, a relative of Richard Boyle, 1st Earl of Cork. He addressed to her the sonnet sequence Amoretti. The marriage was celebrated in Epithalamion. They had a son named Peregrine. In 1596, Spenser wrote a prose pamphlet titled A View of the Present State of Irelande. This piece, in the form of a dialogue, circulated in manuscript, remaining unpublished until the mid-17th century. 
It is probable that it was kept out of print during the author's lifetime because of its inflammatory content. The pamphlet argued that Ireland would never be totally "pacified" by the English until its indigenous language and customs had been destroyed, if necessary by violence. In 1598, during the Nine Years' War, Spenser was driven from his home by the native Irish forces of Aodh Ó Néill. His castle at Kilcolman was burned, and Ben Jonson, who may have had private information, asserted that one of his infant children died in the blaze. In the year after being driven from his home, 1599, Spenser travelled to London, where he died at the age of forty-six – "for want of bread", according to Ben Jonson; one of Jonson's more doubtful statements, since Spenser had a payment to him authorised by the government and was due his pension. His coffin was carried to his grave in Poets' Corner in Westminster Abbey by other poets, who threw many pens and pieces of poetry into his grave with many tears. His second wife survived him and remarried twice. His sister Sarah, who had accompanied him to Ireland, married into the Travers family, and her descendants were prominent landowners in Cork for centuries. Thomas Fuller, in Worthies of England, included a story where the Queen told her treasurer, William Cecil, to pay Spenser £100 for his poetry. The treasurer, however, objected that the sum was too much. She said, "Then give him what is reason". Without receiving his payment in due time, Spenser gave the Queen this quatrain on one of her progresses: I was promis'd on a time, To have a reason for my rhyme: From that time unto this season, I receiv'd nor rhyme nor reason. She immediately ordered the treasurer to pay Spenser the original £100. This story seems to have attached itself to Spenser from Thomas Churchyard, who apparently had difficulty in getting payment of his pension, the only other pension Elizabeth awarded to a poet. Spenser seems to have had no difficulty in receiving payment when it was due as the pension was being collected for him by his publisher, Ponsonby. The Shepheardes Calender is Edmund Spenser's first major work, which appeared in 1579. It emulates Virgil's Eclogues of the first century BCE and the Eclogues of Mantuan by Baptista Mantuanus, a late medieval, early renaissance poet. An eclogue is a short pastoral poem that is in the form of a dialogue or soliloquy. Although all the months together form an entire year, each month stands alone as a separate poem. Editions of the late 16th and early 17th centuries include woodcuts for each month/poem, and thereby have a slight similarity to an emblem book which combines a number of self-contained pictures and texts, usually a short vignette, saying, or allegory with an accompanying illustration. Spenser's masterpiece is the epic poem The Faerie Queene. The first three books of The Faerie Queene were published in 1590, and the second set of three books was published in 1596. Spenser originally indicated that he intended the poem to consist of twelve books, so the version of the poem we have today is incomplete. Despite this, it remains one of the longest poems in the English language. It is an allegorical work, and can be read (as Spenser presumably intended) on several levels of allegory, including as praise of Queen Elizabeth I. In a completely allegorical context, the poem follows several knights in an examination of several virtues. 
In Spenser's "A Letter of the Authors", he states that the entire epic poem is "cloudily enwrapped in allegorical devises", and that the aim behind The Faerie Queene was to "fashion a gentleman or noble person in virtuous and gentle discipline". Spenser published numerous relatively short poems in the last decade of the 16th century, almost all of which consider love or sorrow. In 1591, he published Complaints, a collection of poems that express complaints in mournful or mocking tones. Four years later, in 1595, Spenser published Amoretti and Epithalamion. This volume contains eighty-eight sonnets commemorating his courtship of Elizabeth Boyle. In Amoretti, Spenser uses subtle humour and parody while praising his beloved, reworking Petrarchism in his treatment of longing for a woman. Epithalamion, similar to Amoretti, deals in part with the unease in the development of a romantic and sexual relationship. It was written for his wedding to his young bride, Elizabeth Boyle. Some have speculated that the attention to disquiet, in general, reflects Spenser's personal anxieties at the time, as he was unable to complete his most significant work, The Faerie Queene. In the following year, Spenser released Prothalamion, a wedding song written for the daughters of a duke, allegedly in hopes to gain favour in the court. Spenser used a distinctive verse form, called the Spenserian stanza, in several works, including The Faerie Queene. The stanza's main metre is iambic pentameter with a final line in iambic hexameter (having six feet or stresses, known as an Alexandrine), and the rhyme scheme is ababbcbcc. He also used his own rhyme scheme for the sonnet. In a Spenserian sonnet, the last line of every quatrain is linked with the first line of the next one, yielding the rhyme scheme ababbcbccdcdee. "Men Call you Fayre" is a fine Sonnet from Amoretti. The poet presents the concept of true beauty in the poem. He addresses the sonnet to his beloved, Elizabeth Boyle, and presents his courtship. Like all Renaissance men, Edmund Spenser believed that love is an inexhaustible source of beauty and order. In this Sonnet, the poet expresses his idea of true beauty. The physical beauty will finish after a few days; it is not a permanent beauty. He emphasises beauty of mind and beauty of intellect. He considers his beloved is not simply flesh but is also a spiritual being. The poet opines that he is beloved born of heavenly seed and she is derived from fair spirit. The poet states that because of her clean mind, pure heart and sharp intellect, men call her fair and she deserves it. At the end, the poet praises her spiritual beauty and he worships her because of her Divine Soul. Though Spenser was well-read in classical literature, scholars have noted that his poetry does not rehash tradition, but rather is distinctly his. This individuality may have resulted, to some extent, from a lack of comprehension of the classics. Spenser strove to emulate such ancient Roman poets as Virgil and Ovid, whom he studied during his schooling, but many of his best-known works are notably divergent from those of his predecessors. The language of his poetry is purposely archaic, reminiscent of earlier works such as The Canterbury Tales of Geoffrey Chaucer and Il Canzoniere of Petrarch, whom Spenser greatly admired. An Anglican and a devotee of the Protestant Queen Elizabeth, Spenser was particularly offended by the anti-Elizabethan propaganda that some Catholics circulated. 
Like most Protestants near the time of the Reformation, Spenser saw a Catholic church full of corruption, and he determined that it was not only the wrong religion but the anti-religion. This sentiment is an important backdrop for the battles of The Faerie Queene. Spenser was called "the Poet's Poet" by Charles Lamb, and was admired by John Milton, William Blake, William Wordsworth, John Keats, Lord Byron, Alfred Tennyson and others. Among his contemporaries Walter Raleigh wrote a commendatory poem to The Faerie Queene in 1590 in which he claims to admire and value Spenser's work more so than any other in the English language. John Milton in his Areopagitica mentions "our sage and serious poet Spenser, whom I dare be known to think a better teacher than Scotus or Aquinas". In the 18th century, Alexander Pope compared Spenser to "a mistress, whose faults we see, but love her with them all". In his work A View of the Present State of Irelande (1596), Spenser discussed future plans to establish control over Ireland, the most recent Irish uprising, led by Hugh O'Neill having demonstrated the futility of previous efforts. The work is partly a defence of Lord Arthur Grey de Wilton, who was appointed Lord Deputy of Ireland in 1580, and who greatly influenced Spenser's thinking on Ireland. The goal of the piece was to show that Ireland was in great need of reform. Spenser believed that "Ireland is a diseased portion of the State, it must first be cured and reformed, before it could be in a position to appreciate the good sound laws and blessings of the nation". In A View of the Present State of Ireland, Spenser categorises the "evils" of the Irish people into three prominent categories: laws, customs and religion. According to Spenser, these three elements worked together in creating the supposedly "disruptive and degraded people" who inhabited the country. One example given in the work is the Irish law system termed "Brehon law", which at the time trumped the established law as dictated by the Crown. The Brehon system had its own court and methods of punishing infractions committed. Spenser viewed this system as a backward custom which contributed to the "degradation" of the Irish people. A particular legal punishment viewed with distaste by Spenser was the Brehon method of dealing with murder, which was to impose an éraic (fine) on the murderer's family. From Spenser's viewpoint, the appropriate punishment for murder was capital punishment. Spenser also warned of the dangers that allowing the education of children in the Irish language would bring: "Soe that the speach being Irish, the hart must needes be Irishe; for out of the aboundance of the hart, the tonge speaketh". 
He pressed for a scorched earth policy in Ireland, noting its effectiveness in the Second Desmond Rebellion: "'Out of everye corner of the woode and glenns they came creepinge forth upon theire handes, for theire legges could not beare them; they looked Anatomies [of] death, they spake like ghostes, crying out of theire graves; they did eate of the carrions, happye wheare they could find them, yea, and one another soone after, in soe much as the verye carcasses they spared not to scrape out of theire graves; and if they found a plott of water-cresses or shamrockes, theyr they flocked as to a feast… in a shorte space there were none almost left, and a most populous and plentyfull countrye suddenly lefte voyde of man or beast: yett sure in all that warr, there perished not manye by the sworde, but all by the extreamytie of famine ... they themselves had wrought.'" 1569: 1579: 1590: 1591: 1592: 1595: 1596: Posthumous: Washington University in St. Louis professor Joseph Lowenstein, with the assistance of several undergraduate students, has been involved in creating, editing, and annotating a digital archive of the first publication of poet Edmund Spenser's collective works in 100 years. A large grant from the National Endowment for the Humanities has been given to support this ambitious project centralized at Washington University with support from other colleges in the United States.
[ { "paragraph_id": 0, "text": "Edmund Spenser (/ˈspɛnsər/; 1552/1553 – 13 January O.S. 1599) was an English poet best known for The Faerie Queene, an epic poem and fantastical allegory celebrating the Tudor dynasty and Elizabeth I. He is recognized as one of the premier craftsmen of nascent Modern English verse, and he is considered one of the great poets in the English language.", "title": "" }, { "paragraph_id": 1, "text": "Edmund Spenser was born in East Smithfield, London, around the year 1552; however, there is still some ambiguity as to the exact date of his birth. His parenthood is obscure, but he was probably the son of John Spenser, a journeyman clothmaker. As a young boy, he was educated in London at the Merchant Taylors' School and matriculated as a sizar at Pembroke College, Cambridge. While at Cambridge he became a friend of Gabriel Harvey and later consulted him, despite their differing views on poetry. In 1578, he became for a short time secretary to John Young, Bishop of Rochester. In 1579, he published The Shepheardes Calender and around the same time married his first wife, Machabyas Childe. They had two children, Sylvanus (d. 1638) and Katherine.", "title": "Life" }, { "paragraph_id": 2, "text": "In July 1580, Spenser went to Ireland in service of the newly appointed Lord Deputy, Arthur Grey, 14th Baron Grey de Wilton. Spenser served under Lord Grey with Walter Raleigh at the Siege of Smerwick massacre. When Lord Grey was recalled to England, Spenser stayed on in Ireland, having acquired other official posts and lands in the Munster Plantation. Raleigh acquired other nearby Munster estates confiscated in the Second Desmond Rebellion. Sometime between 1587 and 1589, Spenser acquired his main estate at Kilcolman, near Doneraile in North Cork. He later bought a second holding to the south, at Rennie, on a rock overlooking the river Blackwater in North Cork. Its ruins are still visible today. A short distance away grew a tree, locally known as \"Spenser's Oak\" until it was destroyed in a lightning strike in the 1960s. Local legend claims that he penned some of The Faerie Queene under this tree.", "title": "Life" }, { "paragraph_id": 3, "text": "In 1590, Spenser brought out the first three books of his most famous work, The Faerie Queene, having travelled to London to publish and promote the work, with the likely assistance of Raleigh. He was successful enough to obtain a life pension of £50 a year from the Queen. He probably hoped to secure a place at court through his poetry, but his next significant publication boldly antagonised the queen's principal secretary, Lord Burghley (William Cecil), through its inclusion of the satirical Mother Hubberd's Tale. He returned to Ireland. He was at the centre of a literary circle whose members included his lifelong friend Lodowick Bryskett and Dr. John Longe, Archbishop of Armagh.", "title": "Life" }, { "paragraph_id": 4, "text": "In 1591, Spenser published a translation in verse of Joachim Du Bellay's sonnets, Les Antiquités de Rome, which had been published in 1558. Spenser's version, Ruines of Rome: by Bellay, may also have been influenced by Latin poems on the same subject, written by Jean or Janis Vitalis and published in 1576.", "title": "Life" }, { "paragraph_id": 5, "text": "By 1594, Spenser's first wife had died, and in that year he married a much younger Elizabeth Boyle, a relative of Richard Boyle, 1st Earl of Cork. He addressed to her the sonnet sequence Amoretti. The marriage was celebrated in Epithalamion. 
They had a son named Peregrine.", "title": "Life" }, { "paragraph_id": 6, "text": "In 1596, Spenser wrote a prose pamphlet titled A View of the Present State of Irelande. This piece, in the form of a dialogue, circulated in manuscript, remaining unpublished until the mid-17th century. It is probable that it was kept out of print during the author's lifetime because of its inflammatory content. The pamphlet argued that Ireland would never be totally \"pacified\" by the English until its indigenous language and customs had been destroyed, if necessary by violence.", "title": "Life" }, { "paragraph_id": 7, "text": "In 1598, during the Nine Years' War, Spenser was driven from his home by the native Irish forces of Aodh Ó Néill. His castle at Kilcolman was burned, and Ben Jonson, who may have had private information, asserted that one of his infant children died in the blaze.", "title": "Life" }, { "paragraph_id": 8, "text": "In the year after being driven from his home, 1599, Spenser travelled to London, where he died at the age of forty-six – \"for want of bread\", according to Ben Jonson; one of Jonson's more doubtful statements, since Spenser had a payment to him authorised by the government and was due his pension. His coffin was carried to his grave in Poets' Corner in Westminster Abbey by other poets, who threw many pens and pieces of poetry into his grave with many tears. His second wife survived him and remarried twice. His sister Sarah, who had accompanied him to Ireland, married into the Travers family, and her descendants were prominent landowners in Cork for centuries.", "title": "Life" }, { "paragraph_id": 9, "text": "Thomas Fuller, in Worthies of England, included a story where the Queen told her treasurer, William Cecil, to pay Spenser £100 for his poetry. The treasurer, however, objected that the sum was too much. She said, \"Then give him what is reason\". Without receiving his payment in due time, Spenser gave the Queen this quatrain on one of her progresses:", "title": "Rhyme and reason" }, { "paragraph_id": 10, "text": "I was promis'd on a time, To have a reason for my rhyme: From that time unto this season, I receiv'd nor rhyme nor reason.", "title": "Rhyme and reason" }, { "paragraph_id": 11, "text": "She immediately ordered the treasurer to pay Spenser the original £100.", "title": "Rhyme and reason" }, { "paragraph_id": 12, "text": "This story seems to have attached itself to Spenser from Thomas Churchyard, who apparently had difficulty in getting payment of his pension, the only other pension Elizabeth awarded to a poet. Spenser seems to have had no difficulty in receiving payment when it was due as the pension was being collected for him by his publisher, Ponsonby.", "title": "Rhyme and reason" }, { "paragraph_id": 13, "text": "The Shepheardes Calender is Edmund Spenser's first major work, which appeared in 1579. It emulates Virgil's Eclogues of the first century BCE and the Eclogues of Mantuan by Baptista Mantuanus, a late medieval, early renaissance poet. An eclogue is a short pastoral poem that is in the form of a dialogue or soliloquy. Although all the months together form an entire year, each month stands alone as a separate poem. 
Editions of the late 16th and early 17th centuries include woodcuts for each month/poem, and thereby have a slight similarity to an emblem book which combines a number of self-contained pictures and texts, usually a short vignette, saying, or allegory with an accompanying illustration.", "title": "The Shepheardes Calender" }, { "paragraph_id": 14, "text": "Spenser's masterpiece is the epic poem The Faerie Queene. The first three books of The Faerie Queene were published in 1590, and the second set of three books was published in 1596. Spenser originally indicated that he intended the poem to consist of twelve books, so the version of the poem we have today is incomplete. Despite this, it remains one of the longest poems in the English language. It is an allegorical work, and can be read (as Spenser presumably intended) on several levels of allegory, including as praise of Queen Elizabeth I. In a completely allegorical context, the poem follows several knights in an examination of several virtues. In Spenser's \"A Letter of the Authors\", he states that the entire epic poem is \"cloudily enwrapped in allegorical devises\", and that the aim behind The Faerie Queene was to \"fashion a gentleman or noble person in virtuous and gentle discipline\".", "title": "The Faerie Queene" }, { "paragraph_id": 15, "text": "Spenser published numerous relatively short poems in the last decade of the 16th century, almost all of which consider love or sorrow. In 1591, he published Complaints, a collection of poems that express complaints in mournful or mocking tones. Four years later, in 1595, Spenser published Amoretti and Epithalamion. This volume contains eighty-eight sonnets commemorating his courtship of Elizabeth Boyle. In Amoretti, Spenser uses subtle humour and parody while praising his beloved, reworking Petrarchism in his treatment of longing for a woman. Epithalamion, similar to Amoretti, deals in part with the unease in the development of a romantic and sexual relationship. It was written for his wedding to his young bride, Elizabeth Boyle. Some have speculated that the attention to disquiet, in general, reflects Spenser's personal anxieties at the time, as he was unable to complete his most significant work, The Faerie Queene. In the following year, Spenser released Prothalamion, a wedding song written for the daughters of a duke, allegedly in hopes to gain favour in the court.", "title": "Shorter poems" }, { "paragraph_id": 16, "text": "Spenser used a distinctive verse form, called the Spenserian stanza, in several works, including The Faerie Queene. The stanza's main metre is iambic pentameter with a final line in iambic hexameter (having six feet or stresses, known as an Alexandrine), and the rhyme scheme is ababbcbcc. He also used his own rhyme scheme for the sonnet. In a Spenserian sonnet, the last line of every quatrain is linked with the first line of the next one, yielding the rhyme scheme ababbcbccdcdee. \"Men Call you Fayre\" is a fine Sonnet from Amoretti. The poet presents the concept of true beauty in the poem. He addresses the sonnet to his beloved, Elizabeth Boyle, and presents his courtship. Like all Renaissance men, Edmund Spenser believed that love is an inexhaustible source of beauty and order. In this Sonnet, the poet expresses his idea of true beauty. The physical beauty will finish after a few days; it is not a permanent beauty. He emphasises beauty of mind and beauty of intellect. He considers his beloved is not simply flesh but is also a spiritual being. 
The poet opines that he is beloved born of heavenly seed and she is derived from fair spirit. The poet states that because of her clean mind, pure heart and sharp intellect, men call her fair and she deserves it. At the end, the poet praises her spiritual beauty and he worships her because of her Divine Soul.", "title": "The Spenserian stanza and sonnet" }, { "paragraph_id": 17, "text": "Though Spenser was well-read in classical literature, scholars have noted that his poetry does not rehash tradition, but rather is distinctly his. This individuality may have resulted, to some extent, from a lack of comprehension of the classics. Spenser strove to emulate such ancient Roman poets as Virgil and Ovid, whom he studied during his schooling, but many of his best-known works are notably divergent from those of his predecessors. The language of his poetry is purposely archaic, reminiscent of earlier works such as The Canterbury Tales of Geoffrey Chaucer and Il Canzoniere of Petrarch, whom Spenser greatly admired.", "title": "Influences" }, { "paragraph_id": 18, "text": "An Anglican and a devotee of the Protestant Queen Elizabeth, Spenser was particularly offended by the anti-Elizabethan propaganda that some Catholics circulated. Like most Protestants near the time of the Reformation, Spenser saw a Catholic church full of corruption, and he determined that it was not only the wrong religion but the anti-religion. This sentiment is an important backdrop for the battles of The Faerie Queene.", "title": "Influences" }, { "paragraph_id": 19, "text": "Spenser was called \"the Poet's Poet\" by Charles Lamb, and was admired by John Milton, William Blake, William Wordsworth, John Keats, Lord Byron, Alfred Tennyson and others. Among his contemporaries Walter Raleigh wrote a commendatory poem to The Faerie Queene in 1590 in which he claims to admire and value Spenser's work more so than any other in the English language. John Milton in his Areopagitica mentions \"our sage and serious poet Spenser, whom I dare be known to think a better teacher than Scotus or Aquinas\". In the 18th century, Alexander Pope compared Spenser to \"a mistress, whose faults we see, but love her with them all\".", "title": "Influences" }, { "paragraph_id": 20, "text": "In his work A View of the Present State of Irelande (1596), Spenser discussed future plans to establish control over Ireland, the most recent Irish uprising, led by Hugh O'Neill having demonstrated the futility of previous efforts. The work is partly a defence of Lord Arthur Grey de Wilton, who was appointed Lord Deputy of Ireland in 1580, and who greatly influenced Spenser's thinking on Ireland.", "title": "A View of the Present State of Irelande" }, { "paragraph_id": 21, "text": "The goal of the piece was to show that Ireland was in great need of reform. Spenser believed that \"Ireland is a diseased portion of the State, it must first be cured and reformed, before it could be in a position to appreciate the good sound laws and blessings of the nation\". In A View of the Present State of Ireland, Spenser categorises the \"evils\" of the Irish people into three prominent categories: laws, customs and religion. According to Spenser, these three elements worked together in creating the supposedly \"disruptive and degraded people\" who inhabited the country. One example given in the work is the Irish law system termed \"Brehon law\", which at the time trumped the established law as dictated by the Crown. 
The Brehon system had its own court and methods of punishing infractions committed. Spenser viewed this system as a backward custom which contributed to the \"degradation\" of the Irish people. A particular legal punishment viewed with distaste by Spenser was the Brehon method of dealing with murder, which was to impose an éraic (fine) on the murderer's family. From Spenser's viewpoint, the appropriate punishment for murder was capital punishment. Spenser also warned of the dangers that allowing the education of children in the Irish language would bring: \"Soe that the speach being Irish, the hart must needes be Irishe; for out of the aboundance of the hart, the tonge speaketh\".", "title": "A View of the Present State of Irelande" }, { "paragraph_id": 22, "text": "He pressed for a scorched earth policy in Ireland, noting its effectiveness in the Second Desmond Rebellion:", "title": "A View of the Present State of Irelande" }, { "paragraph_id": 23, "text": "\"'Out of everye corner of the woode and glenns they came creepinge forth upon theire handes, for theire legges could not beare them; they looked Anatomies [of] death, they spake like ghostes, crying out of theire graves; they did eate of the carrions, happye wheare they could find them, yea, and one another soone after, in soe much as the verye carcasses they spared not to scrape out of theire graves; and if they found a plott of water-cresses or shamrockes, theyr they flocked as to a feast… in a shorte space there were none almost left, and a most populous and plentyfull countrye suddenly lefte voyde of man or beast: yett sure in all that warr, there perished not manye by the sworde, but all by the extreamytie of famine ... they themselves had wrought.'\"", "title": "A View of the Present State of Irelande" }, { "paragraph_id": 24, "text": "1569:", "title": "List of works" }, { "paragraph_id": 25, "text": "1579:", "title": "List of works" }, { "paragraph_id": 26, "text": "1590:", "title": "List of works" }, { "paragraph_id": 27, "text": "1591:", "title": "List of works" }, { "paragraph_id": 28, "text": "1592:", "title": "List of works" }, { "paragraph_id": 29, "text": "1595:", "title": "List of works" }, { "paragraph_id": 30, "text": "1596:", "title": "List of works" }, { "paragraph_id": 31, "text": "Posthumous:", "title": "List of works" }, { "paragraph_id": 32, "text": "Washington University in St. Louis professor Joseph Lowenstein, with the assistance of several undergraduate students, has been involved in creating, editing, and annotating a digital archive of the first publication of poet Edmund Spenser's collective works in 100 years. A large grant from the National Endowment for the Humanities has been given to support this ambitious project centralized at Washington University with support from other colleges in the United States.", "title": "Digital archive" } ]
Edmund Spenser was an English poet best known for The Faerie Queene, an epic poem and fantastical allegory celebrating the Tudor dynasty and Elizabeth I. He is recognized as one of the premier craftsmen of nascent Modern English verse, and he is considered one of the great poets in the English language.
2001-06-26T15:17:30Z
2023-12-11T14:35:20Z
[ "Template:IPAc-en", "Template:Cite news", "Template:The Faerie Queene", "Template:Main", "Template:Not a typo", "Template:Cite book", "Template:Succession box", "Template:Authority control", "Template:Sfn", "Template:Cite encyclopedia", "Template:Wikisource author", "Template:Wikiquote", "Template:Internet Archive author", "Template:NPG name", "Template:Infobox writer", "Template:Citation", "Template:Edmund Spenser", "Template:Short description", "Template:Cite web", "Template:Cite journal", "Template:S-off", "Template:S-end", "Template:S-start", "Template:Use British English", "Template:Webarchive", "Template:ISBN", "Template:Commons category", "Template:StandardEbooks", "Template:UK National Archives ID", "Template:Use dmy dates", "Template:Reflist", "Template:Acad", "Template:Gutenberg author", "Template:Librivox author" ]
https://en.wikipedia.org/wiki/Edmund_Spenser
9,540
Electricity generation
Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage (using, for example, the pumped-storage method). Usable electricity is not freely available in nature, so it must be "produced" (that is, transforming other forms of energy to electricity). Production is carried out in power stations (also called "power plants"). Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are also exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields generated by fast-moving charged particles generated by the fusion reaction (see magnetohydrodynamics). Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power is forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power. The fundamental principles of electricity generation were discovered in the 1820s and early 1830s by British scientist Michael Faraday. His method, still used today, is for electricity to be generated by the movement of a loop of wire, or Faraday disc, between the poles of a magnet. Central power stations became economically practical with the development of alternating current (AC) power transmission, using power transformers to transmit power at high voltage and with low loss. Commercial electricity production started with the coupling of the dynamo to the hydraulic turbine. The mechanical production of electric power began the Second Industrial Revolution and made possible several inventions using electricity, with the major contributors being Thomas Alva Edison and Nikola Tesla. Previously the only way to produce electricity was by chemical reactions or using battery cells, and the only practical use of electricity was for the telegraph. Electricity generation at central power stations started in 1882, when a steam engine driving a dynamo at Pearl Street Station produced a DC current that powered public lighting on Pearl Street, New York. The new technology was quickly adopted by many cities around the world, which adapted their gas-fueled street lights to electric power. Soon after electric lights would be used in public buildings, in businesses, and to power public transport, such as trams and trains. The first power plants used water power or coal. Today a variety of energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources. In the 1880s the popularity of electricity grew massively with the introduction of the Incandescent light bulb. 
Although there are 22 recognised inventors of the light bulb prior to Joseph Swan and Thomas Edison, Edison and Swan's invention became by far the most successful and popular of all. During the early years of the 19th century, massive jumps in electrical sciences were made. And by the later 19th century the advancement of electrical technology and engineering led to electricity being part of everyday life. With the introduction of many electrical inventions and their implementation into everyday life, the demand for electricity within homes grew dramatically. With this increase in demand, the potential for profit was seen by many entrepreneurs who began investing into electrical systems to eventually create the first electricity public utilities. This process in history is often described as electrification. The earliest distribution of electricity came from companies operating independently of one another. A consumer would purchase electricity from a producer, and the producer would distribute it through their own power grid. As technology improved so did the productivity and efficiency of its generation. Inventions such as the steam turbine had a massive impact on the efficiency of electrical generation but also the economics of generation as well. This conversion of heat energy into mechanical work was similar to that of steam engines, however at a significantly larger scale and far more productively. The improvements of these large-scale generation plants were critical to the process of centralised generation as they would become vital to the entire power system that we now use today. Throughout the middle of the 20th century many utilities began merging their distribution networks due to economic and efficiency benefits. Along with the invention of long-distance power transmission, the coordination of power plants began to form. This system was then secured by regional system operators to ensure stability and reliability. The electrification of homes began in Northern Europe and in the Northern America in the 1920s in large cities and urban areas. It was not until the 1930s that rural areas saw the large-scale establishment of electrification. 2021 world electricity generation by source (total generation was 28 petawatt-hours) Several fundamental methods exist to convert other forms of energy into electrical energy. Utility-scale generation is achieved by rotating electric generators or by photovoltaic systems. A small proportion of electric power distributed by utilities is provided by batteries. Other forms of electricity generation used in niche applications include the triboelectric effect, the piezoelectric effect, the thermoelectric effect, and betavoltaics. Electric generators transform kinetic energy into electricity. This is the most used form for generating electricity and is based on Faraday's law. It can be seen experimentally by rotating a magnet within closed loops of conducting material (e.g. copper wire). Almost all commercial electrical generation is done using electromagnetic induction, in which mechanical energy forces a generator to rotate. Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. 
rechargeable batteries) are used for storage systems rather than primary generation systems. Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge. The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems. Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by around 20% per year led by increases in Germany, Japan, United States, China, and India. The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in widespread residential selling prices. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand. All power grids have varying loads on them but the daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal. Nuclear power plants can produce a huge amount of power from a single unit. However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high. Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed for moving turbines and the generation of power. It may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle. Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover such as an engine or the turbines described above, drives a rotating magnetic field past stationary coils of wire thereby turning mechanical energy into electricity. The only commercial scale electricity production that does not employ a generator is solar PV. Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction. 
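As a point of reference for the induction-based generation just described, Faraday's law of induction can be stated compactly. The formula below is a standard textbook statement rather than something drawn from this article; N and Φ_B (number of turns and magnetic flux through the coil) are notation introduced here only for illustration.

```latex
% Faraday's law of induction: the electromotive force induced in a coil of
% N turns equals the negative rate of change of the magnetic flux through it.
\begin{equation*}
  \mathcal{E} \;=\; -\,N \,\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
\end{equation*}
```

In a rotating generator the flux through each coil varies periodically as the rotor turns, which is why the raw output of such a machine is alternating current.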
There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power. Most electric generation is driven by heat engines. The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine (invented by Sir Charles Parsons in 1884) currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include: Turbines can also use heat-transfer fluids other than steam. Supercritical carbon dioxide based cycles can provide higher conversion efficiency due to faster heat exchange, higher energy density and simpler power cycle infrastructure. Supercritical carbon dioxide blends, which are currently in development, can further increase efficiency by optimizing their critical pressure and temperature points. Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may be used for backup generation or as a prime source of power within isolated villages. Total worldwide gross production of electricity in 2016 was 25,082 TWh. Sources of electricity were coal and peat 38.3%, natural gas 23.1%, hydroelectric 16.6%, nuclear power 10.4%, oil 3.7%, solar/wind/geothermal/tidal/other 5.6%, biomass and waste 2.3%. In 2021, wind and solar generated electricity reached 10% of globally produced electricity. Clean sources (solar, wind and other) generated 38% of the world's electricity. The United States has long been the largest producer and consumer of electricity, with a global share in 2005 of at least 25%, followed by China, Japan, Russia, and India. In 2011, China overtook the United States to become the largest producer of electricity. Variations between countries generating electrical power affect concerns about the environment. In France only 10% of electricity is generated from fossil fuels; the US is higher at 70% and China is at 80%. The cleanliness of electricity depends on its source. Methane leaks (from natural gas to fuel gas-fired power plants) and carbon dioxide emissions from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain. Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and particulate matter in the US. According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also create district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output. 
A fundamental issue regarding centralised generation and the current electrical generation methods in use today is the significant negative environmental effects that many of the generation processes have. Processes such as coal and gas not only release carbon dioxide as they combust, but their extraction from the ground also impacts the environment. Open pit coal mines use large areas of land to extract coal and limit the potential for productive land use after the excavation. Natural gas extraction releases large amounts of methane into the atmosphere when extracted from the ground greatly increase global greenhouse gases. Although nuclear power plants do not release carbon dioxide through electricity generation, there are risks associated with nuclear waste and safety concerns associated with the use of nuclear sources. Per unit of electricity generated coal and gas-fired power life-cycle greenhouse gas emissions are almost always at least ten times that of other generation methods. Centralised generation is electricity generation by large-scale centralised facilities, sent through transmission lines to consumers. These facilities are usually located far away from consumers and distribute the electricity through high voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept being that multi-megawatt or gigawatt scale large stations create electricity for a large number of people. The vast majority of electricity used is created from centralised generation. Most centralised power generation comes from large power plants run by fossil fuels such as coal or natural gas, though nuclear or large hydroelectricity plants are also commonly used. Centralised generation is fundamentally the opposite of distributed generation. Distributed generation is the small-scale generation of electricity to smaller groups of consumers. This can also include independently producing electricity by either solar or wind power. In recent years distributed generation as has seen a spark in popularity due to its propensity to use renewable energy generation methods such as rooftop solar. Centralised energy sources are large power plants that produce huge amounts of electricity to a large number of consumers. Most power plants used in centralised generation are thermal power plants meaning that they use a fuel to heat steam to produce a pressurised gas which in turn spins a turbine and generates electricity. This is the traditional way of producing energy. This process relies on several forms of technology to produce widespread electricity, these being natural coal, gas and nuclear forms of thermal generation. More recently solar and wind have become large scale. A photovoltaic power station, also known as a solar park, solar farm, or solar power plant, is a large-scale grid-connected photovoltaic power system (PV system) designed for the supply of merchant power. They are different from most building-mounted and other decentralized solar power because they supply power at the utility level, rather than to a local user or users. Utility-scale solar is sometimes used to describe this type of project. This approach differs from concentrated solar power, the other major large-scale solar generation technology, which uses heat to drive a variety of conventional generator systems. Both approaches have their own advantages and disadvantages, but to date, for a variety of reasons, photovoltaic technology has seen much wider use. 
As of 2019, about 97% of utility-scale solar power capacity was PV. In some countries, the nameplate capacity of photovoltaic power stations is rated in megawatt-peak (MWp), which refers to the solar array's theoretical maximum DC power output. In other countries, the manufacturer states the surface and the efficiency. However, Canada, Japan, Spain, and the United States often specify using the converted lower nominal power output in MWAC, a measure more directly comparable to other forms of power generation. Most solar parks are developed at a scale of at least 1 MWp. As of 2018, the world's largest operating photovoltaic power stations surpassed 1 gigawatt. At the end of 2019, about 9,000 solar farms were larger than 4 MWAC (utility scale), with a combined capacity of over 220 GWAC. A wind farm or wind park, also called a wind power station or wind power plant, is a group of wind turbines in the same location used to produce electricity. Wind farms vary in size from a small number of turbines to several hundred wind turbines covering an extensive area. Wind farms can be either onshore or offshore. Many of the largest operational onshore wind farms are located in China, India, and the United States. For example, the largest wind farm in the world, Gansu Wind Farm in China had a capacity of over 6,000 MW by 2012, with a goal of 20,000 MW by 2020. As of December 2020, the 1218 MW Hornsea Wind Farm in the UK is the largest offshore wind farm in the world. Individual wind turbine designs continue to increase in power, resulting in fewer turbines being needed for the same total output. A coal-fired power station or coal power plant is a thermal power station which burns coal to generate electricity. Worldwide there are over 2,400 coal-fired power stations, totaling over 2,000 gigawatts capacity. They generate about a third of the world's electricity, but cause many illnesses and the most early deaths, mainly from air pollution. A coal-fired power station is a type of fossil fuel power station. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines that turn generators. Thus chemical energy stored in coal is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine where natural gas is added along with oxygen which in turn combusts and expands through the turbine to force a generator to spin. Natural gas power plants are more efficient than coal power generation, they however contribute to climate change but not as highly as coal generation. Not only do they produce carbon dioxide from the ignition of natural gas, but also the extraction of gas when mined releases a significant amount of methane into the atmosphere. Nuclear power plants create electricity through steam turbines where the heat input is from the process of nuclear fission. Currently, nuclear power produces 11% of all electricity in the world. Most nuclear reactors use uranium as a source of fuel. In a process called nuclear fission, energy, in the form of heat, is released when nuclear atoms are split. Electricity is created through the use of a nuclear reactor where heat produced by nuclear fission is used to produce steam which in turn spins turbines and powers the generators. 
Although there are several types of nuclear reactors, all fundamentally use this process. Normal emissions due to nuclear power plants are primarily waste heat and radioactive spent fuel. In a reactor accident, significant amounts of radioisotopes can be released to the environment, posing a long term hazard to life. This hazard has been a continuing concern of environmentalists. Accidents such as the Three Mile Island accident, Chernobyl disaster and the Fukushima nuclear disaster illustrate this problem. The table lists 45 countries with their total electricity capacities. The data is from 2022. According to the Energy Information Administration, the total global electricity capacity in 2022 was nearly 8.9 terawatt (TW), more than four times the total global electricity capacity in 1981. The global average per-capita electricity capacity was about 1,120 watts in 2022, nearly two and a half times the global average per-capita electricity capacity in 1981. Iceland has the highest installed capacity per capita in the world, at about 8,990 watts. All developed countries have an average per-capita electricity capacity above the global average per-capita electricity capacity, with the United Kingdom having the lowest average per-capita electricity capacity of all other developed countries.
[ { "paragraph_id": 0, "text": "Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage (using, for example, the pumped-storage method).", "title": "" }, { "paragraph_id": 1, "text": "Usable electricity is not freely available in nature, so it must be \"produced\" (that is, transforming other forms of energy to electricity). Production is carried out in power stations (also called \"power plants\"). Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are also exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields generated by fast-moving charged particles generated by the fusion reaction (see magnetohydrodynamics).", "title": "" }, { "paragraph_id": 2, "text": "Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power is forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power.", "title": "" }, { "paragraph_id": 3, "text": "The fundamental principles of electricity generation were discovered in the 1820s and early 1830s by British scientist Michael Faraday. His method, still used today, is for electricity to be generated by the movement of a loop of wire, or Faraday disc, between the poles of a magnet. Central power stations became economically practical with the development of alternating current (AC) power transmission, using power transformers to transmit power at high voltage and with low loss.", "title": "History" }, { "paragraph_id": 4, "text": "Commercial electricity production started with the coupling of the dynamo to the hydraulic turbine. The mechanical production of electric power began the Second Industrial Revolution and made possible several inventions using electricity, with the major contributors being Thomas Alva Edison and Nikola Tesla. Previously the only way to produce electricity was by chemical reactions or using battery cells, and the only practical use of electricity was for the telegraph.", "title": "History" }, { "paragraph_id": 5, "text": "Electricity generation at central power stations started in 1882, when a steam engine driving a dynamo at Pearl Street Station produced a DC current that powered public lighting on Pearl Street, New York. The new technology was quickly adopted by many cities around the world, which adapted their gas-fueled street lights to electric power. Soon after electric lights would be used in public buildings, in businesses, and to power public transport, such as trams and trains.", "title": "History" }, { "paragraph_id": 6, "text": "The first power plants used water power or coal. 
Today a variety of energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources.", "title": "History" }, { "paragraph_id": 7, "text": "In the 1880s the popularity of electricity grew massively with the introduction of the Incandescent light bulb. Although there are 22 recognised inventors of the light bulb prior to Joseph Swan and Thomas Edison, Edison and Swan's invention became by far the most successful and popular of all. During the early years of the 19th century, massive jumps in electrical sciences were made. And by the later 19th century the advancement of electrical technology and engineering led to electricity being part of everyday life. With the introduction of many electrical inventions and their implementation into everyday life, the demand for electricity within homes grew dramatically. With this increase in demand, the potential for profit was seen by many entrepreneurs who began investing into electrical systems to eventually create the first electricity public utilities. This process in history is often described as electrification.", "title": "History" }, { "paragraph_id": 8, "text": "The earliest distribution of electricity came from companies operating independently of one another. A consumer would purchase electricity from a producer, and the producer would distribute it through their own power grid. As technology improved so did the productivity and efficiency of its generation. Inventions such as the steam turbine had a massive impact on the efficiency of electrical generation but also the economics of generation as well. This conversion of heat energy into mechanical work was similar to that of steam engines, however at a significantly larger scale and far more productively. The improvements of these large-scale generation plants were critical to the process of centralised generation as they would become vital to the entire power system that we now use today.", "title": "History" }, { "paragraph_id": 9, "text": "Throughout the middle of the 20th century many utilities began merging their distribution networks due to economic and efficiency benefits. Along with the invention of long-distance power transmission, the coordination of power plants began to form. This system was then secured by regional system operators to ensure stability and reliability. The electrification of homes began in Northern Europe and in the Northern America in the 1920s in large cities and urban areas. It was not until the 1930s that rural areas saw the large-scale establishment of electrification.", "title": "History" }, { "paragraph_id": 10, "text": "2021 world electricity generation by source (total generation was 28 petawatt-hours)", "title": "Methods of generation" }, { "paragraph_id": 11, "text": "Several fundamental methods exist to convert other forms of energy into electrical energy. Utility-scale generation is achieved by rotating electric generators or by photovoltaic systems. A small proportion of electric power distributed by utilities is provided by batteries. Other forms of electricity generation used in niche applications include the triboelectric effect, the piezoelectric effect, the thermoelectric effect, and betavoltaics.", "title": "Methods of generation" }, { "paragraph_id": 12, "text": "Electric generators transform kinetic energy into electricity. This is the most used form for generating electricity and is based on Faraday's law. 
It can be seen experimentally by rotating a magnet within closed loops of conducting material (e.g. copper wire). Almost all commercial electrical generation is done using electromagnetic induction, in which mechanical energy forces a generator to rotate.", "title": "Methods of generation" }, { "paragraph_id": 13, "text": "Electrochemistry is the direct transformation of chemical energy into electricity, as in a battery. Electrochemical electricity generation is important in portable and mobile applications. Currently, most electrochemical power comes from batteries. Primary cells, such as the common zinc–carbon batteries, act as power sources directly, but secondary cells (i.e. rechargeable batteries) are used for storage systems rather than primary generation systems. Open electrochemical systems, known as fuel cells, can be used to extract power either from natural fuels or from synthesized fuels. Osmotic power is a possibility at places where salt and fresh water merge.", "title": "Methods of generation" }, { "paragraph_id": 14, "text": "The photovoltaic effect is the transformation of light into electrical energy, as in solar cells. Photovoltaic panels convert sunlight directly to DC electricity. Power inverters can then convert that to AC electricity if needed. Although sunlight is free and abundant, solar power electricity is still usually more expensive to produce than large-scale mechanically generated power due to the cost of the panels. Low-efficiency silicon solar cells have been decreasing in cost and multijunction cells with close to 30% conversion efficiency are now commercially available. Over 40% efficiency has been demonstrated in experimental systems. Until recently, photovoltaics were most commonly used in remote sites where there is no access to a commercial power grid, or as a supplemental electricity source for individual homes and businesses. Recent advances in manufacturing efficiency and photovoltaic technology, combined with subsidies driven by environmental concerns, have dramatically accelerated the deployment of solar panels. Installed capacity is growing by around 20% per year led by increases in Germany, Japan, United States, China, and India.", "title": "Methods of generation" }, { "paragraph_id": 15, "text": "The selection of electricity production modes and their economic viability varies in accordance with demand and region. The economics vary considerably around the world, resulting in widespread residential selling prices. Hydroelectric plants, nuclear power plants, thermal power plants and renewable sources have their own pros and cons, and selection is based upon the local power requirement and the fluctuations in demand. All power grids have varying loads on them but the daily minimum is the base load, often supplied by plants which run continuously. Nuclear, coal, oil, gas and some hydro plants can supply base load. If well construction costs for natural gas are below $10 per MWh, generating electricity from natural gas is cheaper than generating power by burning coal.", "title": "Economics" }, { "paragraph_id": 16, "text": "Nuclear power plants can produce a huge amount of power from a single unit. However, nuclear disasters have raised concerns over the safety of nuclear power, and the capital cost of nuclear plants is very high. Hydroelectric power plants are located in areas where the potential energy from falling water can be harnessed for moving turbines and the generation of power. 
It may not be an economically viable single source of production where the ability to store the flow of water is limited and the load varies too much during the annual production cycle.", "title": "Economics" }, { "paragraph_id": 17, "text": "Electric generators were known in simple forms from the discovery of electromagnetic induction in the 1830s. In general, some form of prime mover such as an engine or the turbines described above, drives a rotating magnetic field past stationary coils of wire thereby turning mechanical energy into electricity. The only commercial scale electricity production that does not employ a generator is solar PV.", "title": "Generating equipment" }, { "paragraph_id": 18, "text": "Almost all commercial electrical power on Earth is generated with a turbine, driven by wind, water, steam or burning gas. The turbine drives a generator, thus transforming its mechanical energy into electrical energy by electromagnetic induction. There are many different methods of developing mechanical energy, including heat engines, hydro, wind and tidal power. Most electric generation is driven by heat engines. The combustion of fossil fuels supplies most of the energy to these engines, with a significant fraction from nuclear fission and some from renewable sources. The modern steam turbine (invented by Sir Charles Parsons in 1884) currently generates about 80% of the electric power in the world using a variety of heat sources. Turbine types include:", "title": "Generating equipment" }, { "paragraph_id": 19, "text": "Turbines can also use other heat-transfer liquids than steam. Supercritical carbon dioxide based cycles can provide higher conversion efficiency due to faster heat exchange, higher energy density and simpler power cycle infrastructure. Supercritical carbon dioxide blends, that are currently in development, can further increase efficiency by optimizing its critical pressure and temperature points.", "title": "Generating equipment" }, { "paragraph_id": 20, "text": "Although turbines are most common in commercial power generation, smaller generators can be powered by gasoline or diesel engines. These may used for backup generation or as a prime source of power within isolated villages.", "title": "Generating equipment" }, { "paragraph_id": 21, "text": "Total worldwide gross production of electricity in 2016 was 25 082 TWh. Sources of electricity were coal and peat 38.3%, natural gas 23.1%, hydroelectric 16.6%, nuclear power 10.4%, oil 3.7%, solar/wind/geothermal/tidal/other 5.6%, biomass and waste 2.3%.", "title": "Production" }, { "paragraph_id": 22, "text": "In 2021, Wind and solar generated electricity reached 10% of globally produced electricity. Clean sources (Solar and wind and other) generated 38% of the world's electricity.", "title": "Production" }, { "paragraph_id": 23, "text": "", "title": "Production" }, { "paragraph_id": 24, "text": "The United States has long been the largest producer and consumer of electricity, with a global share in 2005 of at least 25%, followed by China, Japan, Russia, and India. In 2011, China overtook the United States to become the largest producer of electricity.", "title": "Production" }, { "paragraph_id": 25, "text": "Variations between countries generating electrical power affect concerns about the environment. In France only 10% of electricity is generated from fossil fuels, the US is higher at 70% and China is at 80%. The cleanliness of electricity depends on its source. 
Methane leaks (from natural gas to fuel gas-fired power plants) and carbon dioxide emissions from fossil fuel-based electricity generation account for a significant portion of world greenhouse gas emissions. In the United States, fossil fuel combustion for electric power generation is responsible for 65% of all emissions of sulfur dioxide, the main component of acid rain. Electricity generation is the fourth highest combined source of NOx, carbon monoxide, and particulate matter in the US.", "title": "Environmental concerns" }, { "paragraph_id": 26, "text": "According to the International Energy Agency (IEA), low-carbon electricity generation needs to account for 85% of global electrical output by 2040 in order to ward off the worst effects of climate change. Like other organizations including the Energy Impact Center (EIC) and the United Nations Economic Commission for Europe (UNECE), the IEA has called for the expansion of nuclear and renewable energy to meet that objective. Some, like EIC founder Bret Kugelmass, believe that nuclear power is the primary method for decarbonizing electricity generation because it can also power direct air capture that removes existing carbon emissions from the atmosphere. Nuclear power plants can also create district heating and desalination projects, limiting carbon emissions and the need for expanded electrical output.", "title": "Environmental concerns" }, { "paragraph_id": 27, "text": "A fundamental issue regarding centralised generation and the current electrical generation methods in use today is the significant negative environmental effects that many of the generation processes have. Processes such as coal and gas not only release carbon dioxide as they combust, but their extraction from the ground also impacts the environment. Open pit coal mines use large areas of land to extract coal and limit the potential for productive land use after the excavation. Natural gas extraction releases large amounts of methane into the atmosphere when extracted from the ground greatly increase global greenhouse gases. Although nuclear power plants do not release carbon dioxide through electricity generation, there are risks associated with nuclear waste and safety concerns associated with the use of nuclear sources.", "title": "Environmental concerns" }, { "paragraph_id": 28, "text": "Per unit of electricity generated coal and gas-fired power life-cycle greenhouse gas emissions are almost always at least ten times that of other generation methods.", "title": "Environmental concerns" }, { "paragraph_id": 29, "text": "Centralised generation is electricity generation by large-scale centralised facilities, sent through transmission lines to consumers. These facilities are usually located far away from consumers and distribute the electricity through high voltage transmission lines to a substation, where it is then distributed to consumers; the basic concept being that multi-megawatt or gigawatt scale large stations create electricity for a large number of people. The vast majority of electricity used is created from centralised generation. Most centralised power generation comes from large power plants run by fossil fuels such as coal or natural gas, though nuclear or large hydroelectricity plants are also commonly used. Centralised generation is fundamentally the opposite of distributed generation. Distributed generation is the small-scale generation of electricity to smaller groups of consumers. 
This can also include independently producing electricity by either solar or wind power. In recent years distributed generation as has seen a spark in popularity due to its propensity to use renewable energy generation methods such as rooftop solar.", "title": "Centralised and distributed generation" }, { "paragraph_id": 30, "text": "Centralised energy sources are large power plants that produce huge amounts of electricity to a large number of consumers. Most power plants used in centralised generation are thermal power plants meaning that they use a fuel to heat steam to produce a pressurised gas which in turn spins a turbine and generates electricity. This is the traditional way of producing energy. This process relies on several forms of technology to produce widespread electricity, these being natural coal, gas and nuclear forms of thermal generation. More recently solar and wind have become large scale.", "title": "Technologies" }, { "paragraph_id": 31, "text": "A photovoltaic power station, also known as a solar park, solar farm, or solar power plant, is a large-scale grid-connected photovoltaic power system (PV system) designed for the supply of merchant power. They are different from most building-mounted and other decentralized solar power because they supply power at the utility level, rather than to a local user or users. Utility-scale solar is sometimes used to describe this type of project.", "title": "Technologies" }, { "paragraph_id": 32, "text": "This approach differs from concentrated solar power, the other major large-scale solar generation technology, which uses heat to drive a variety of conventional generator systems. Both approaches have their own advantages and disadvantages, but to date, for a variety of reasons, photovoltaic technology has seen much wider use. As of 2019, about 97% of utility-scale solar power capacity was PV.", "title": "Technologies" }, { "paragraph_id": 33, "text": "In some countries, the nameplate capacity of photovoltaic power stations is rated in megawatt-peak (MWp), which refers to the solar array's theoretical maximum DC power output. In other countries, the manufacturer states the surface and the efficiency. However, Canada, Japan, Spain, and the United States often specify using the converted lower nominal power output in MWAC, a measure more directly comparable to other forms of power generation. Most solar parks are developed at a scale of at least 1 MWp. As of 2018, the world's largest operating photovoltaic power stations surpassed 1 gigawatt. At the end of 2019, about 9,000 solar farms were larger than 4 MWAC (utility scale), with a combined capacity of over 220 GWAC.", "title": "Technologies" }, { "paragraph_id": 34, "text": "A wind farm or wind park, also called a wind power station or wind power plant, is a group of wind turbines in the same location used to produce electricity. Wind farms vary in size from a small number of turbines to several hundred wind turbines covering an extensive area. Wind farms can be either onshore or offshore.", "title": "Technologies" }, { "paragraph_id": 35, "text": "Many of the largest operational onshore wind farms are located in China, India, and the United States. For example, the largest wind farm in the world, Gansu Wind Farm in China had a capacity of over 6,000 MW by 2012, with a goal of 20,000 MW by 2020. As of December 2020, the 1218 MW Hornsea Wind Farm in the UK is the largest offshore wind farm in the world. 
Individual wind turbine designs continue to increase in power, resulting in fewer turbines being needed for the same total output.", "title": "Technologies" }, { "paragraph_id": 36, "text": "A coal-fired power station or coal power plant is a thermal power station which burns coal to generate electricity. Worldwide there are over 2,400 coal-fired power stations, totaling over 2,000 gigawatts capacity. They generate about a third of the world's electricity, but cause many illnesses and the most early deaths, mainly from air pollution.", "title": "Technologies" }, { "paragraph_id": 37, "text": "A coal-fired power station is a type of fossil fuel power station. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines that turn generators. Thus chemical energy stored in coal is converted successively into thermal energy, mechanical energy and, finally, electrical energy.", "title": "Technologies" }, { "paragraph_id": 38, "text": "Natural gas is ignited to create pressurised gas which is used to spin turbines to generate electricity. Natural gas plants use a gas turbine where natural gas is added along with oxygen which in turn combusts and expands through the turbine to force a generator to spin.", "title": "Technologies" }, { "paragraph_id": 39, "text": "Natural gas power plants are more efficient than coal power generation, they however contribute to climate change but not as highly as coal generation. Not only do they produce carbon dioxide from the ignition of natural gas, but also the extraction of gas when mined releases a significant amount of methane into the atmosphere.", "title": "Technologies" }, { "paragraph_id": 40, "text": "Nuclear power plants create electricity through steam turbines where the heat input is from the process of nuclear fission. Currently, nuclear power produces 11% of all electricity in the world. Most nuclear reactors use uranium as a source of fuel. In a process called nuclear fission, energy, in the form of heat, is released when nuclear atoms are split. Electricity is created through the use of a nuclear reactor where heat produced by nuclear fission is used to produce steam which in turn spins turbines and powers the generators. Although there are several types of nuclear reactors, all fundamentally use this process.", "title": "Technologies" }, { "paragraph_id": 41, "text": "Normal emissions due to nuclear power plants are primarily waste heat and radioactive spent fuel. In a reactor accident, significant amounts of radioisotopes can be released to the environment, posing a long term hazard to life. This hazard has been a continuing concern of environmentalists. Accidents such as the Three Mile Island accident, Chernobyl disaster and the Fukushima nuclear disaster illustrate this problem.", "title": "Technologies" }, { "paragraph_id": 42, "text": "The table lists 45 countries with their total electricity capacities. The data is from 2022. According to the Energy Information Administration, the total global electricity capacity in 2022 was nearly 8.9 terawatt (TW), more than four times the total global electricity capacity in 1981. The global average per-capita electricity capacity was about 1,120 watts in 2022, nearly two and a half times the global average per-capita electricity capacity in 1981. Iceland has the highest installed capacity per capita in the world, at about 8,990 watts. 
All developed countries have an average per-capita electricity capacity above the global average per-capita electricity capacity, with the United Kingdom having the lowest average per-capita electricity capacity of all other developed countries.", "title": "Electricity generation capacity by country" } ]
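As a rough sanity check of the capacity figures quoted in this article, the short sketch below redoes the per-capita arithmetic. The world population values are approximate outside assumptions, not figures taken from the article, and the variable names are illustrative only.

```python
# Rough consistency check of the installed-capacity figures quoted above.
# The population values are approximate outside assumptions, not from the article.
POP_2022 = 7.95e9          # assumed world population, 2022
POP_1981 = 4.5e9           # assumed world population, 1981

total_2022_w = 8.9e12      # ~8.9 TW of installed capacity in 2022 (from the article)
per_capita_2022 = total_2022_w / POP_2022
print(f"2022 per-capita capacity: {per_capita_2022:.0f} W (article says about 1,120 W)")

# "More than four times" the 1981 total and "nearly two and a half times" the
# 1981 per-capita figure should describe roughly the same 1981 starting point.
implied_total_1981 = total_2022_w / 4
implied_per_capita_1981 = per_capita_2022 / 2.5
print(f"implied 1981 total: {implied_total_1981 / 1e12:.1f} TW")
print(f"implied 1981 per-capita: {implied_per_capita_1981:.0f} W "
      f"vs {implied_total_1981 / POP_1981:.0f} W from total/population")
```

The two implied 1981 per-capita figures agree only roughly, which is expected given the rounded multipliers quoted in the text.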
Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery to end users or its storage. Usable electricity is not freely available in nature, so it must be "produced". Production is carried out in power stations. Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are also exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields generated by fast-moving charged particles generated by the fusion reaction. Phasing out coal-fired power stations and eventually gas-fired power stations, or, if practical, capturing their greenhouse gas emissions, is an important part of the energy transformation required to limit climate change. Vastly more solar power and wind power is forecast to be required, with electricity demand increasing strongly with further electrification of transport, homes and industry. However, in 2023, it was reported that the global electricity supply was approaching peak CO2 emissions thanks to the growth of solar and wind power.
2001-07-31T18:55:18Z
2023-12-31T18:10:30Z
[ "Template:Citation needed", "Template:Flagicon", "Template:Power engineering", "Template:Main", "Template:Portal", "Template:Electricity generation", "Template:Latest pie chart of world power by source", "Template:See also", "Template:Excerpt", "Template:Mw-datatable", "Template:Cite web", "Template:Cite journal", "Template:Authority control", "Template:Cn", "Template:Multiple image", "Template:Update", "Template:Srn", "Template:Reflist", "Template:Cite news", "Template:Webarchive", "Template:Cite book", "Template:Short description" ]
https://en.wikipedia.org/wiki/Electricity_generation
9,541
Design of experiments
The design of experiments (DOE or DOX), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation. In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment. Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity. Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience. A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics. Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s. Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less). The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. 
Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952. A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research. This example of the design of experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs. Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by θ1, ..., θ8. We consider two different experiments: The question of design of experiments is: which experiment is better? The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other. Many problems of the design of experiments involve combinatorial designs, as in this example and others. False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention. Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance. P-hacking can be prevented by preregistering studies, in which researchers have to send their data analysis plan to the journal they wish to publish their paper in before they even start their data collection, so no data manipulation is possible. 
Another way to prevent this is taking a double-blind design to the data-analysis phase, making the study triple-blind, where the data are sent to a data-analyst unrelated to the research who scrambles up the data so there is no way to know which group participants belong to before they are potentially removed as outliers. Clear and complete documentation of the experimental methodology is also important in order to support replication of results. An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section: The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group, without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used. In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to claim causal attribution when their design doesn't allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design (Adér & Mellenbergh, 2008). It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. 
Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned. One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time. Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations. In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards. Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics. As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space. Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, S. S. Shrikhande, J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn. The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Some discussion of experimental design in the context of system identification (model building for static or dynamic models) is also available in the literature. Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments. 
In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
[ { "paragraph_id": 0, "text": "The design of experiments (DOE or DOX), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.", "title": "" }, { "paragraph_id": 1, "text": "In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as \"input variables\" or \"predictor variables.\" The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as \"output variables\" or \"response variables.\" The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.", "title": "" }, { "paragraph_id": 2, "text": "Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.", "title": "" }, { "paragraph_id": 3, "text": "Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.", "title": "" }, { "paragraph_id": 4, "text": "A theory of statistical inference was developed by Charles S. Peirce in \"Illustrations of the Logic of Science\" (1877–1878) and \"A Theory of Probable Inference\" (1883), two publications that emphasized the importance of randomization-based inference in statistics.", "title": "History" }, { "paragraph_id": 5, "text": "Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.", "title": "History" }, { "paragraph_id": 6, "text": "Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. 
In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).", "title": "History" }, { "paragraph_id": 7, "text": "The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the \"two-armed bandit\", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.", "title": "History" }, { "paragraph_id": 8, "text": "A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.", "title": "Fisher's principles" }, { "paragraph_id": 9, "text": "This example of design experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs.", "title": "Example" }, { "paragraph_id": 10, "text": "Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviations of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by", "title": "Example" }, { "paragraph_id": 11, "text": "We consider two different experiments:", "title": "Example" }, { "paragraph_id": 12, "text": "The question of design of experiments is: which experiment is better?", "title": "Example" }, { "paragraph_id": 13, "text": "The variance of the estimate X1 of θ1 is σ if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight would require 64 weighings if the items are weighed separately. 
However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other.", "title": "Example" }, { "paragraph_id": 14, "text": "Many problems of the design of experiments involve combinatorial designs, as in this example and others.", "title": "Example" }, { "paragraph_id": 15, "text": "False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.", "title": "Avoiding false positives" }, { "paragraph_id": 16, "text": "Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group. Therefore, the researcher can not affect the participants' response to the intervention.", "title": "Avoiding false positives" }, { "paragraph_id": 17, "text": "Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious \"p-hacking\": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance.", "title": "Avoiding false positives" }, { "paragraph_id": 18, "text": "P-hacking can be prevented by preregistering researches, in which researchers have to send their data analysis plan to the journal they wish to publish their paper in before they even start their data collection, so no data manipulation is possible.", "title": "Avoiding false positives" }, { "paragraph_id": 19, "text": "Another way to prevent this is taking a double-blind design to the data-analysis phase, making the study triple-blind, where the data are sent to a data-analyst unrelated to the research who scrambles up the data so there is no way to know which participants belong to before they are potentially taken away as outliers.", "title": "Avoiding false positives" }, { "paragraph_id": 20, "text": "Clear and complete documentation of the experimental methodology is also important in order to support replication of results.", "title": "Avoiding false positives" }, { "paragraph_id": 21, "text": "An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:", "title": "Discussion topics when setting up an experimental design" }, { "paragraph_id": 22, "text": "The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same element as the experimental group, without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can certify with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. 
In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.", "title": "Discussion topics when setting up an experimental design" }, { "paragraph_id": 23, "text": "In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the reason for the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must be aware of not certifying about causal attribution when their design doesn't allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design (Adér & Mellenbergh, 2008).", "title": "Causal attributions" }, { "paragraph_id": 24, "text": "It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.", "title": "Statistical control" }, { "paragraph_id": 25, "text": "One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.", "title": "Statistical control" }, { "paragraph_id": 26, "text": "Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. 
Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to Indian Statistical Institute in early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry albeit with some reservations.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 27, "text": "In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 28, "text": "Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 29, "text": "As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 30, "text": "Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 31, "text": "The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 32, "text": "Some discussion of experimental design in the context of system identification (model building for static or dynamic models) is given in and.", "title": "Experimental designs after Fisher" }, { "paragraph_id": 33, "text": "Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments. In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, \"... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another.\" (p 380) Regarding experimental design, \"...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...\". (p 393)", "title": "Human participant constraints" } ]
The design of experiments, also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation. In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points to be used in the experiment. Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity. Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.
2001-06-29T15:17:47Z
2023-12-11T01:21:01Z
[ "Template:Use dmy dates", "Template:Main", "Template:Cite news", "Template:Statistics", "Template:Six Sigma Tools", "Template:Original research", "Template:Div col end", "Template:Webarchive", "Template:Citation", "Template:Commons category", "Template:Experimental design", "Template:Medical research studies", "Template:Technical inline", "Template:Reflist", "Template:Cite web", "Template:Cite journal", "Template:Cite book", "Template:Refend", "Template:Library resources box", "Template:-", "Template:See also", "Template:Div col", "Template:ISBN", "Template:Refbegin", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/Design_of_experiments
9,545
Empirical research
Empirical research is research using empirical evidence. It is also a way of gaining knowledge by means of direct and indirect observation or experience. Empiricism values some research more than other kinds. Empirical evidence (the record of one's direct observations or experiences) can be analyzed quantitatively or qualitatively. By quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions that cannot be studied in laboratory settings, particularly in the social sciences and in education. In some fields, quantitative research may begin with a research question (e.g., "Does listening to vocal music during the learning of a word list have an effect on later memory for these words?") which is tested through experimentation. Usually, the researcher has a certain theory regarding the topic under investigation. Based on this theory, statements or hypotheses will be proposed (e.g., "Listening to vocal music has a negative effect on learning a word list."). From these hypotheses, predictions about specific events are derived (e.g., "People who study a word list while listening to vocal music will remember fewer words on a later memory test than people who study a word list in silence."). These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing. The term empirical was originally used to refer to certain ancient Greek practitioners of medicine who rejected adherence to the dogmatic doctrines of the day, preferring instead to rely on the observation of phenomena as perceived in experience. Later empiricism referred to a theory of knowledge in philosophy which adheres to the principle that knowledge arises from experience and evidence gathered specifically using the senses. In scientific use, the term empirical refers to the gathering of data using only evidence that is observable by the senses or in some cases using calibrated scientific instruments. What early philosophers described as empiricist and empirical research have in common is the dependence on observable data to formulate and test theories and come to conclusions. The researcher attempts to describe accurately the interaction between the instrument (or the human senses) and the entity being observed. If instrumentation is involved, the researcher is expected to calibrate his/her instrument by applying it to known standard objects and documenting the results before applying it to unknown objects. In other words, it describes research that has not been done before and its results. In practice, the accumulation of evidence for or against any particular theory involves planned research designs for the collection of empirical data, and academic rigor plays a large part in judging the merits of research design. Several typologies for such designs have been suggested, one of the most popular of which comes from Campbell and Stanley. 
They are responsible for popularizing the widely cited distinction among pre-experimental, experimental, and quasi-experimental designs and are staunch advocates of the central role of randomized experiments in educational research. Accurate analysis of data using standardized statistical methods in scientific studies is critical to determining the validity of empirical research. Statistical formulas such as regression, uncertainty coefficient, t-test, chi square, and various types of ANOVA (analyses of variance) are fundamental to forming logical, valid conclusions. If empirical data reach significance under the appropriate statistical formula, the research hypothesis is supported. If not, the null hypothesis is supported (or, more accurately, not rejected), meaning no effect of the independent variable(s) was observed on the dependent variable(s). The outcome of empirical research using statistical hypothesis testing is never proof. It can only support a hypothesis, reject it, or do neither. These methods yield only probabilities. Among scientific researchers, empirical evidence (as distinct from empirical research) refers to objective evidence that appears the same regardless of the observer. For example, a thermometer will not display different temperatures for each individual who observes it. Temperature, as measured by an accurate, well calibrated thermometer, is empirical evidence. By contrast, non-empirical evidence is subjective, depending on the observer. Following the previous example, observer A might truthfully report that a room is warm, while observer B might truthfully report that the same room is cool, though both observe the same reading on the thermometer. The use of empirical evidence negates this effect of personal (i.e., subjective) experience or time. The differing views of empiricism and rationalism concern the extent to which gaining knowledge depends on sense experience. According to rationalism, there are a number of ways in which knowledge and concepts can be gained independently of sense experience. According to empiricism, sense experience is the main source of all knowledge and concepts. In general, rationalists develop their views along two different lines. First, they argue that there are cases in which the content of knowledge or concepts outstrips the information that sense experience can provide (Hjørland, 2010, 2). Second, they construct accounts of how reasoning provides additional knowledge about a specific or broader domain. Empiricists present complementary lines of thought. First, they develop accounts of how experience provides the information that rationalists cite, insofar as we have it in the first place. At times, empiricists opt for skepticism as an alternative to rationalism: if experience cannot provide the knowledge or concepts cited by rationalists, then that knowledge or those concepts do not exist (Pearce, 2010, 35). Second, empiricists attack the rationalists' accounts of how reasoning can be an important source of knowledge or concepts. The overall disagreement between empiricists and rationalists primarily concerns how knowledge is gained, with respect to the sources of knowledge and concepts. 
In some cases, disagreement over how knowledge is gained leads to conflicting views on other aspects as well. There might also be disagreement about the nature of warrant, or about the limits of knowledge and thought. Empiricists share the view that there is no innate knowledge and that knowledge is instead derived from experience. These experiences are either reasoned about using the mind or sensed through the five senses humans possess (Bernard, 2011, 5). Rationalists, on the other hand, share the view that innate knowledge exists, though they differ over which objects of innate knowledge they choose. In order to follow rationalism, one must adopt one of three claims related to the theory: intuition or deduction, innate knowledge, or innate concepts. The further a concept is removed from experience and from the mental operations that can be performed on experience, the more plausibly it may be claimed to be innate. Further, empiricism with regard to a specific subject rejects the corresponding versions of innate knowledge and of intuition or deduction (Weiskopf, 2008, 16). Insofar as concepts and knowledge within that subject area are acknowledged, such knowledge depends chiefly on experience through the human senses. A.D. de Groot's empirical cycle consists of five stages: observation, induction, deduction, testing, and evaluation.
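The testing and evaluation stages of this cycle correspond to the statistical hypothesis testing described above. As a rough sketch of that workflow (not part of the original text; the recall scores below are invented, the word-list/vocal-music scenario is the hypothetical example quoted earlier, and SciPy 1.6+ is assumed for the alternative argument):

```python
# Sketch of the hypothesis-testing step: do participants who study a word
# list while listening to vocal music recall fewer words than participants
# who study in silence?  All numbers are made up for illustration.
from scipy import stats

music   = [12, 9, 11, 10, 8, 13, 9, 10, 11, 7]      # words recalled, music group
silence = [14, 13, 15, 11, 12, 16, 13, 14, 12, 15]  # words recalled, silence group

# One-sided independent-samples t-test of the directional prediction
# ("the music group recalls fewer words").
result = stats.ttest_ind(music, silence, alternative="less")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Decision rule: if p < .05, the research hypothesis is supported;
# otherwise the null hypothesis is not rejected.  Either way this is
# probabilistic support, never proof.
alpha = 0.05
print("research hypothesis supported" if result.pvalue < alpha
      else "null hypothesis not rejected")
```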
[ { "paragraph_id": 0, "text": "Empirical research is research using empirical evidence. It is also a way of gaining knowledge by means of direct and indirect observation or experience. Empiricism values some research more than other kinds. Empirical evidence (the record of one's direct observations or experiences) can be analyzed quantitatively or qualitatively. Quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions that cannot be studied in laboratory settings, particularly in the social sciences and in education.", "title": "" }, { "paragraph_id": 1, "text": "In some fields, quantitative research may begin with a research question (e.g., \"Does listening to vocal music during the learning of a word list have an effect on later memory for these words?\") which is tested through experimentation. Usually, the researcher has a certain theory regarding the topic under investigation. Based on this theory, statements or hypotheses will be proposed (e.g., \"Listening to vocal music has a negative effect on learning a word list.\"). From these hypotheses, predictions about specific events are derived (e.g., \"People who study a word list while listening to vocal music will remember fewer words on a later memory test than people who study a word list in silence.\"). These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing.", "title": "" }, { "paragraph_id": 2, "text": "The term empirical was originally used to refer to certain ancient Greek practitioners of medicine who rejected adherence to the dogmatic doctrines of the day, preferring instead to rely on the observation of phenomena as perceived in experience. Later empiricism referred to a theory of knowledge in philosophy which adheres to the principle that knowledge arises from experience and evidence gathered specifically using the senses. In scientific use, the term empirical refers to the gathering of data using only evidence that is observable by the senses or in some cases using calibrated scientific instruments. What early philosophers described as empiricist and empirical research have in common is the dependence on observable data to formulate and test theories and come to conclusions.", "title": "Terminology" }, { "paragraph_id": 3, "text": "The researcher attempts to describe accurately the interaction between the instrument (or the human senses) and the entity being observed. If instrumentation is involved, the researcher is expected to calibrate his/her instrument by applying it to known standard objects and documenting the results before applying it to unknown objects. In other words, it describes the research that has not taken place before and their results.", "title": "Usage" }, { "paragraph_id": 4, "text": "In practice, the accumulation of evidence for or against any particular theory involves planned research designs for the collection of empirical data, and academic rigor plays a large part of judging the merits of research design. 
Several typologies for such designs have been suggested, one of the most popular of which comes from Campbell and Stanley. They are responsible for popularizing the widely cited distinction among pre-experimental, experimental, and quasi-experimental designs and are staunch advocates of the central role of randomized experiments in educational research.", "title": "Usage" }, { "paragraph_id": 5, "text": "Accurate analysis of data using standardized statistical methods in scientific studies is critical to determining the validity of empirical research. Statistical formulas such as regression, uncertainty coefficient, t-test, chi square, and various types of ANOVA (analyses of variance) are fundamental to forming logical, valid conclusions. If empirical data reach significance under the appropriate statistical formula, the research hypothesis is supported. If not, the null hypothesis is supported (or, more accurately, not rejected), meaning no effect of the independent variable(s) was observed on the dependent variable(s).", "title": "Usage" }, { "paragraph_id": 6, "text": "The outcome of empirical research using statistical hypothesis testing is never proof. It can only support a hypothesis, reject it, or do neither. These methods yield only probabilities. Among scientific researchers, empirical evidence (as distinct from empirical research) refers to objective evidence that appears the same regardless of the observer. For example, a thermometer will not display different temperatures for each individual who observes it. Temperature, as measured by an accurate, well calibrated thermometer, is empirical evidence. By contrast, non-empirical evidence is subjective, depending on the observer. Following the previous example, observer A might truthfully report that a room is warm, while observer B might truthfully report that the same room is cool, though both observe the same reading on the thermometer. The use of empirical evidence negates this effect of personal (i.e., subjective) experience or time.", "title": "Usage" }, { "paragraph_id": 7, "text": "The varying perception of empiricism and rationalism shows concern with the limit to which there is dependency on experience of sense as an effort of gaining knowledge. According to rationalism, there are a number of different ways in which sense experience is gained independently for the knowledge and concepts. According to empiricism, sense experience is considered as the main source of every piece of knowledge and the concepts. In general, rationalists are known for the development of their own views following two different way. First, the key argument can be placed that there are cases in which the content of knowledge or concepts end up outstripping the information. This outstripped information is provided by the sense experience (Hjørland, 2010, 2). Second, there is construction of accounts as to how reasoning helps in the provision of addition knowledge about a specific or broader scope. Empiricists are known to be presenting complementary senses related to thought.", "title": "Usage" }, { "paragraph_id": 8, "text": "First, there is development of accounts of how there is provision of information by experience that is cited by rationalists. This is insofar for having it in the initial place. At times, empiricists tend to be opting skepticism as an option of rationalism. If experience is not helpful in the provision of knowledge or concept cited by rationalists, then they do not exist (Pearce, 2010, 35). 
Second, empiricists hold the tendency of attacking the accounts of rationalists while considering reasoning to be an important source of knowledge or concepts. The overall disagreement between empiricists and rationalists shows primary concerns in how there is gaining of knowledge with respect to the sources of knowledge and concept. In some of the cases, disagreement at the point of gaining knowledge results in the provision of conflicting responses to other aspects as well. There might be a disagreement in the overall feature of warrant, while limiting the knowledge and thought. Empiricists are known for sharing the view that there is no existence of innate knowledge and rather that is derivation of knowledge out of experience. These experiences are either reasoned using the mind or sensed through the five senses human possess (Bernard, 2011, 5). On the other hand, rationalists are known to be sharing the view that there is existence of innate knowledge and this is different for the objects of innate knowledge being chosen.", "title": "Usage" }, { "paragraph_id": 9, "text": "In order to follow rationalism, there must be adoption of one of the three claims related to the theory that are deduction or intuition, innate knowledge, and innate concept. The more there is removal of concept from mental operations and experience, there can be performance over experience with increased plausibility in being innate. Further ahead, empiricism in context with a specific subject provides a rejection of the corresponding version related to innate knowledge and deduction or intuition (Weiskopf, 2008, 16). Insofar as there is acknowledgement of concepts and knowledge within the area of subject, the knowledge has major dependence on experience through human senses.", "title": "Usage" }, { "paragraph_id": 10, "text": "A.D. de Groot's empirical cycle:", "title": "Empirical cycle" } ]
Empirical research is research using empirical evidence. It is also a way of gaining knowledge by means of direct and indirect observation or experience. Empiricism values some research more than other kinds. Empirical evidence can be analyzed quantitatively or qualitatively. Quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected. Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions that cannot be studied in laboratory settings, particularly in the social sciences and in education. In some fields, quantitative research may begin with a research question which is tested through experimentation. Usually, the researcher has a certain theory regarding the topic under investigation. Based on this theory, statements or hypotheses will be proposed. From these hypotheses, predictions about specific events are derived. These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing.
2001-03-24T16:11:53Z
2023-12-03T15:38:21Z
[ "Template:Authority control", "Template:Short description", "Template:Distinguish", "Template:Reflist", "Template:ISBN", "Template:Wiktionary-inline" ]
https://en.wikipedia.org/wiki/Empirical_research
9,546
Engineering statistics
Engineering statistics combines engineering and statistics using scientific methods for analyzing data. Engineering statistics involves data concerning manufacturing processes such as component dimensions, tolerances, type of material, and fabrication process control. There are many methods used in engineering analysis and they are often displayed as histograms to give a visual representation of the data as opposed to being just numerical. Examples of methods are design of experiments (DOE), quality control and process control, time and methods engineering, reliability engineering, probabilistic design, and system identification. Engineering statistics dates back to 1000 B.C. when the abacus was developed as a means to calculate numerical data. In the 1600s, the development of information processing to systematically analyze and process data began. In 1654, the slide rule technique was developed by Robert Bissaker for advanced data calculations. In 1833, a British mathematician named Charles Babbage conceived the idea of an automatic computer, which inspired developers at Harvard University and IBM to design the first mechanical automatic-sequence-controlled calculator called MARK I. The integration of computers and calculators into industry brought about a more efficient means of analyzing data and the beginning of engineering statistics. A factorial experiment is one where, contrary to the standard experimental philosophy of changing only one independent variable and holding everything else constant, multiple independent variables are tested at the same time. With this design, statistical engineers can see both the direct effects of one independent variable (main effect), as well as potential interaction effects that arise when multiple independent variables together produce a different result than each would on its own. Six Sigma is a set of techniques to improve the reliability of a manufacturing process. Ideally, all products will have exactly the specifications that were desired, but the countless imperfections of real-world manufacturing make this impossible. The as-built specifications of a product are assumed to be centered around a mean, with each individual product deviating some amount away from that mean in a normal distribution. The goal of Six Sigma is to ensure that the acceptable specification limits are six standard deviations away from the mean of the distribution; in other words, that each step of the manufacturing process has at most a 0.00034% chance of producing a defect.
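The tail probabilities behind these figures can be checked directly from the normal distribution (a sketch, not part of the original article). The commonly quoted 3.4 defects per million (0.00034%) corresponds to the one-sided 4.5σ tail, i.e., the 6σ limit minus the conventional 1.5σ allowance for long-term drift of the process mean; the raw ±6σ tail area is far smaller:

```python
# Sketch: normal-distribution tail areas behind the Six Sigma figures,
# assuming normally distributed process output.
from scipy.stats import norm

# Probability of falling outside +/- 6 standard deviations with no drift.
raw_six_sigma = 2 * norm.sf(6)
print(f"raw +/-6 sigma tail:      {raw_six_sigma:.2e}")       # ~2.0e-09

# Conventional Six Sigma accounting: the mean is allowed to drift by
# 1.5 sigma, so defects are counted in the one-sided 4.5 sigma tail.
shifted_tail = norm.sf(6 - 1.5)
print(f"one-sided 4.5 sigma tail: {shifted_tail:.2e}")        # ~3.4e-06
print(f"as a percentage:          {shifted_tail * 100:.5f}%") # ~0.00034%
```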
[ { "paragraph_id": 0, "text": "Engineering statistics combines engineering and statistics using scientific methods for analyzing data. Engineering statistics involves data concerning manufacturing processes such as: component dimensions, tolerances, type of material, and fabrication process control. There are many methods used in engineering analysis and they are often displayed as histograms to give a visual of the data as opposed to being just numerical. Examples of methods are:", "title": "" }, { "paragraph_id": 1, "text": "Engineering statistics dates back to 1000 B.C. when the Abacus was developed as means to calculate numerical data. In the 1600s, the development of information processing to systematically analyze and process data began. In 1654, the Slide Rule technique was developed by Robert Bissaker for advanced data calculations. In 1833, a British mathematician named Charles Babbage designed the idea of an automatic computer which inspired developers at Harvard University and IBM to design the first mechanical automatic-sequence-controlled calculator called MARK I. The integration of computers and calculators into the industry brought about a more efficient means of analyzing data and the beginning of engineering statistics.", "title": "History" }, { "paragraph_id": 2, "text": "A factorial experiment is one where, contrary to the standard experimental philosophy of changing only one independent variable and holding everything else constant, multiple independent variables are tested at the same time. With this design, statistical engineers can see both the direct effects of one independent variable (main effect), as well as potential interaction effects that arise when multiple independent variables provide a different result when together than either would on its own.", "title": "Examples" }, { "paragraph_id": 3, "text": "Six Sigma is a set of techniques to improve the reliability of a manufacturing process. Ideally, all products will have the exact same specifications equivalent to what was desired, but countless imperfections of real-world manufacturing makes this impossible. The as-built specifications of a product are assumed to be centered around a mean, with each individual product deviating some amount away from that mean in a normal distribution. The goal of Six Sigma is to ensure that the acceptable specification limits are six standard deviations away from the mean of the distribution; in other words, that each step of the manufacturing process has at most a 0.00034% chance of producing a defect.", "title": "Examples" } ]
Engineering statistics combines engineering and statistics using scientific methods for analyzing data. Engineering statistics involves data concerning manufacturing processes such as: component dimensions, tolerances, type of material, and fabrication process control. There are many methods used in engineering analysis and they are often displayed as histograms to give a visual of the data as opposed to being just numerical. Examples of methods are: Design of Experiments (DOE) is a methodology for formulating scientific and engineering problems using statistical models. The protocol specifies a randomization procedure for the experiment and specifies the primary data-analysis, particularly in hypothesis testing. In a secondary analysis, the statistical analyst further examines the data to suggest other questions and to help plan future experiments. In engineering applications, the goal is often to optimize a process or product, rather than to subject a scientific hypothesis to test of its predictive adequacy. The use of optimal designs reduces the cost of experimentation. Quality control and process control use statistics as a tool to manage conformance to specifications of manufacturing processes and their products. Time and methods engineering use statistics to study repetitive operations in manufacturing in order to set standards and find optimum manufacturing procedures. Reliability engineering which measures the ability of a system to perform for its intended function and has tools for improving performance. Probabilistic design involving the use of probability in product and system design System identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models.
2023-05-03T19:58:16Z
[ "Template:Commons category-inline", "Template:Statistics", "Template:Short description", "Template:Main articles", "Template:Main", "Template:ISBN", "Template:Cite book", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Engineering_statistics
9,549
Edgar Allan Poe
Edgar Allan Poe (né Edgar Poe; January 19, 1809 – October 7, 1849) was an American writer, poet, author, editor, and literary critic who is best known for his poetry and short stories, particularly his tales of mystery and the macabre. He is widely regarded as a central figure of Romanticism and Gothic fiction in the United States, and of American literature. Poe was one of the country's earliest practitioners of the short story, and is considered the inventor of the detective fiction genre, as well as a significant contributor to the emerging genre of science fiction. He is the first well-known American writer to earn a living through writing alone, resulting in a financially difficult life and career. Poe was born in Boston, the second child of actors David and Elizabeth "Eliza" Poe. His father abandoned the family in 1810, and when his mother died the following year, Poe was taken in by John and Frances Allan of Richmond, Virginia. They never formally adopted him, but he was with them well into young adulthood. He attended the University of Virginia but left after a year due to lack of money. He quarreled with John Allan over the funds for his education, and his gambling debts. In 1827, having enlisted in the United States Army under an assumed name, he published his first collection, Tamerlane and Other Poems, credited only to "a Bostonian". Poe and Allan reached a temporary rapprochement after the death of Allan's wife in 1829. Poe later failed as an officer cadet at West Point, declared a firm wish to be a poet and writer, and parted ways with Allan. Poe switched his focus to prose, and spent the next several years working for literary journals and periodicals, becoming known for his own style of literary criticism. His work forced him to move between several cities, including Baltimore, Philadelphia, and New York City. In 1836, he married his 13-year-old cousin, Virginia Clemm, but she died of tuberculosis in 1847. In January 1845, he published his poem "The Raven" to instant success. He planned for years to produce his own journal The Penn, later renamed The Stylus. But before it began publishing, Poe died in Baltimore in 1849, aged 40, under mysterious circumstances. The cause of his death remains unknown, and has been variously attributed to many causes including disease, alcoholism, substance abuse, and suicide. Poe and his works influenced literature around the world, as well as specialized fields such as cosmology and cryptography. He and his work appear throughout popular culture in literature, music, films, and television. A number of his homes are dedicated museums. The Mystery Writers of America present an annual Edgar Award for distinguished work in the mystery genre. Edgar Poe was born in Boston, Massachusetts, on January 19, 1809, the second child of American actor David Poe Jr. and English-born actress Elizabeth Arnold Hopkins Poe. He had an elder brother, Henry, and a younger sister, Rosalie. Their grandfather, David Poe, had emigrated from County Cavan, Ireland, around 1750. His father abandoned the family in 1810, and his mother died a year later from pulmonary tuberculosis. Poe was then taken into the home of John Allan, a successful merchant in Richmond, Virginia, who dealt in a variety of goods, including cloth, wheat, tombstones, tobacco, and slaves. The Allans served as a foster family and gave him the name "Edgar Allan Poe", although they never formally adopted him. The Allan family had Poe baptized into the Episcopal Church in 1812. 
John Allan alternately spoiled and aggressively disciplined his foster son. The family sailed to the United Kingdom in 1815, and Poe attended the grammar school for a short period in Irvine, Ayrshire, Scotland, where Allan was born, before rejoining the family in London in 1816. There he studied at a boarding school in Chelsea until summer 1817. He was subsequently entered at the Reverend John Bransby's Manor House School at Stoke Newington, then a suburb 4 miles (6 km) north of London. Poe moved with the Allans back to Richmond in 1820. In 1824, he served as the lieutenant of the Richmond youth honor guard as the city celebrated the visit of the Marquis de Lafayette. In March 1825, Allan's uncle and business benefactor William Galt died, who was said to be one of the wealthiest men in Richmond, leaving Allan several acres of real estate. The inheritance was estimated at $750,000 (equivalent to $19,000,000 in 2022). By summer 1825, Allan celebrated his expansive wealth by purchasing a two-story brick house called Moldavia. Poe may have become engaged to Sarah Elmira Royster before he registered at the University of Virginia in February 1826 to study ancient and modern languages. The university was in its infancy, established on the ideals of its founder Thomas Jefferson. It had strict rules against gambling, horses, guns, tobacco, and alcohol, but these rules were mostly ignored. Jefferson enacted a system of student self-government, allowing students to choose their own studies, make their own arrangements for boarding, and report all wrongdoing to the faculty. The unique system was still in chaos, and there was a high dropout rate. During his time there, Poe lost touch with Royster and also became estranged from his foster father over gambling debts. He claimed that Allan had not given him sufficient money to register for classes, purchase texts, and procure and furnish a dormitory. Allan did send additional money and clothes, but Poe's debts increased. Poe gave up on the university after a year but did not feel welcome returning to Richmond, especially when he learned that his sweetheart Royster had married another man, Alexander Shelton. He traveled to Boston in April 1827, sustaining himself with odd jobs as a clerk and newspaper writer, and started using the pseudonym Henri Le Rennet during this period. Poe was unable to support himself, so he enlisted in the United States Army as a private on May 27, 1827, using the name "Edgar A. Perry". He claimed that he was 22 years old even though he was 18. He first served at Fort Independence in Boston Harbor for five dollars a month. That year, he released his first book, a 40-page collection of poetry titled Tamerlane and Other Poems, attributed with the byline "by a Bostonian". Only 50 copies were printed, and the book received virtually no attention. Poe's regiment was posted to Fort Moultrie in Charleston, South Carolina, and traveled by ship on the brig Waltham on November 8, 1827. Poe was promoted to "artificer", an enlisted tradesman who prepared shells for artillery, and had his monthly pay doubled. He served for two years and attained the rank of Sergeant Major for Artillery, the highest rank that a non-commissioned officer could achieve; he then sought to end his five-year enlistment early. He revealed his real name and his circumstances to his commanding officer, Lieutenant Howard, who would allow Poe to be discharged only if he reconciled with Allan. 
Poe wrote a letter to Allan, who was unsympathetic and spent several months ignoring Poe's pleas; Allan may not have written to Poe even to make him aware of his foster mother's illness. Frances Allan died on February 28, 1829, and Poe visited the day after her burial. Perhaps softened by his wife's death, Allan agreed to support Poe's attempt to be discharged in order to receive an appointment to the United States Military Academy at West Point, New York. Poe was finally discharged on April 15, 1829, after securing a replacement to finish his enlisted term for him. Before entering West Point, he moved to Baltimore for a time to stay with his widowed aunt Maria Clemm, her daughter Virginia Eliza Clemm (Poe's first cousin), his brother Henry, and his invalid grandmother Elizabeth Cairnes Poe. In September of that year, Poe received "the very first words of encouragement I ever remember to have heard" in a review of his poetry by influential critic John Neal, prompting Poe to dedicate one of the poems to Neal in his second book Al Aaraaf, Tamerlane and Minor Poems, published in Baltimore in 1829. Poe traveled to West Point and matriculated as a cadet on July 1, 1830. In October 1830, Allan married his second wife Louisa Patterson. The marriage and bitter quarrels with Poe over the children born to Allan out of extramarital affairs led to the foster father finally disowning Poe. Poe decided to leave West Point by purposely getting court-martialed. On February 8, 1831, he was tried for gross neglect of duty and disobedience of orders for refusing to attend formations, classes, or church. He tactically pleaded not guilty to induce dismissal, knowing that he would be found guilty. Poe left for New York in February 1831 and released a third volume of poems, simply titled Poems. The book was financed with help from his fellow cadets at West Point, many of whom donated 75 cents to the cause, raising a total of $170. They may have been expecting verses similar to the satirical ones Poe had written about commanding officers. It was printed by Elam Bliss of New York, labeled as "Second Edition", and including a page saying, "To the U.S. Corps of Cadets this volume is respectfully dedicated". The book once again reprinted the long poems "Tamerlane" and "Al Aaraaf" but also six previously unpublished poems, including early versions of "To Helen", "Israfel", and "The City in the Sea". Poe returned to Baltimore to his aunt, brother, and cousin in March 1831. His elder brother Henry had been in ill health, in part due to problems with alcoholism, and he died on August 1, 1831. After his brother's death, Poe began more earnest attempts to start his career as a writer, but he chose a difficult time in American publishing to do so. He was one of the first Americans to live by writing alone and was hampered by the lack of an international copyright law. American publishers often produced unauthorized copies of British works rather than paying for new work by Americans. The industry was also particularly hurt by the Panic of 1837. There was a booming growth in American periodicals around this time, fueled in part by new technology, but many did not last beyond a few issues. Publishers often refused to pay their writers or paid them much later than they promised, and Poe repeatedly resorted to humiliating pleas for money and other assistance. After his early attempts at poetry, Poe had turned his attention to prose, likely based on John Neal's critiques in The Yankee magazine. 
He placed a few stories with a Philadelphia publication and began work on his only drama Politian. The Baltimore Saturday Visiter awarded him a prize in October 1833 for his short story "MS. Found in a Bottle". The story brought him to the attention of John P. Kennedy, a Baltimorean of considerable means who helped Poe place some of his stories and introduced him to Thomas W. White, editor of the Southern Literary Messenger in Richmond. In 1835, Poe became assistant editor of the Southern Literary Messenger, but White discharged him within a few weeks for being drunk on the job. Poe returned to Baltimore, where he obtained a license to marry his cousin Virginia on September 22, 1835, though it is unknown if they were married at that time. He was 26 and she was 13. Poe was reinstated by White after promising good behavior, and he returned to Richmond with Virginia and her mother. He remained at the Messenger until January 1837. During this period, Poe claimed that its circulation increased from 700 to 3,500. He published several poems, book reviews, critiques, and stories in the paper. On May 16, 1836, he and Virginia held a Presbyterian wedding ceremony performed by Amasa Converse at their Richmond boarding house, with a witness falsely attesting Clemm's age as 21. In 1838, Poe relocated to Philadelphia, where he lived at four different residences between 1838 and 1844, one of which at 532 N. 7th Street has been preserved as a National Historic Landmark. That same year, Poe's novel The Narrative of Arthur Gordon Pym of Nantucket was published and widely reviewed. In the summer of 1839, he became assistant editor of Burton's Gentleman's Magazine. He published numerous articles, stories, and reviews, enhancing the reputation as a trenchant critic that he had established at the Messenger. Also in 1839, the collection Tales of the Grotesque and Arabesque was published in two volumes, though he made little money from it and it received mixed reviews. In June 1840, Poe published a prospectus announcing his intentions to start his own journal called The Stylus, although he originally intended to call it The Penn, since it would have been based in Philadelphia. He bought advertising space for his prospectus in the June 6, 1840, issue of Philadelphia's Saturday Evening Post: "Prospectus of the Penn Magazine, a Monthly Literary journal to be edited and published in the city of Philadelphia by Edgar A. Poe." The journal was never produced before Poe's death. Poe left Burton's after about a year and found a position as writer and co-editor at Graham's Magazine, a successful monthly publication. In the last number of Graham's for 1841, Poe was among the co-signatories to an editorial note of celebration of the tremendous success the magazine had achieved in the past year: "Perhaps the editors of no magazine, either in America or in Europe, ever sat down, at the close of a year, to contemplate the progress of their work with more satisfaction than we do now. Our success has been unexampled, almost incredible. We may assert without fear of contradiction that no periodical ever witnessed the same increase during so short a period." Around this time, Poe attempted to secure a position in the administration of John Tyler, claiming that he was a member of the Whig Party. He hoped to be appointed to the United States Custom House in Philadelphia with help from President Tyler's son Robert, an acquaintance of Poe's friend Frederick Thomas. 
Poe failed to show up for a meeting with Thomas to discuss the appointment in mid-September 1842, claiming to have been sick, though Thomas believed that he had been drunk. Poe was promised an appointment, but all positions were filled by others. One evening in January 1842, Virginia showed the first signs of consumption, or tuberculosis, while singing and playing the piano, which Poe described as breaking a blood vessel in her throat. She only partially recovered, and Poe began to drink more heavily under the stress of her illness. He left Graham's and attempted to find a new position, for a time angling for a government post. He returned to New York where he worked briefly at the Evening Mirror before becoming editor of the Broadway Journal, and later its owner. There Poe alienated himself from other writers by publicly accusing Henry Wadsworth Longfellow of plagiarism, though Longfellow never responded. On January 29, 1845, Poe's poem "The Raven" appeared in the Evening Mirror and became a popular sensation. It made Poe a household name almost instantly, though he was paid only $9 for its publication. It was concurrently published in The American Review: A Whig Journal under the pseudonym "Quarles". The Broadway Journal failed in 1846, and Poe moved to a cottage in Fordham, New York, in the Bronx. That home, now known as the Edgar Allan Poe Cottage, was relocated in later years to a park near the southeast corner of the Grand Concourse and Kingsbridge Road. Nearby, Poe befriended the Jesuits at St. John's College, now Fordham University. Virginia died at the cottage on January 30, 1847. Biographers and critics often suggest that Poe's frequent theme of the "death of a beautiful woman" stems from the repeated loss of women throughout his life, including his wife. Poe was increasingly unstable after his wife's death. He attempted to court poet Sarah Helen Whitman, who lived in Providence, Rhode Island. Their engagement failed, purportedly because of Poe's drinking and erratic behavior. There is also strong evidence that Whitman's mother intervened and did much to derail the relationship. Poe then returned to Richmond and resumed a relationship with his childhood sweetheart Sarah Elmira Royster. On October 3, 1849, Poe was found semiconscious in Baltimore, "in great distress, and... in need of immediate assistance", according to Joseph W. Walker, who found him. He was taken to the Washington Medical College, where he died on Sunday, October 7, 1849, at 5:00 in the morning. Poe was not coherent long enough to explain how he came to be in his dire condition and why he was wearing clothes that were not his own. He is said to have repeatedly called out the name "Reynolds" on the night before his death, though it is unclear to whom he was referring. His attending physician said that Poe's final words were, "Lord help my poor soul". All of the relevant medical records have been lost, including Poe's death certificate. Newspapers at the time reported Poe's death as "congestion of the brain" or "cerebral inflammation", common euphemisms for death from disreputable causes such as alcoholism. The actual cause of death remains a mystery. Speculation has included delirium tremens, heart disease, epilepsy, syphilis, meningeal inflammation, cholera, carbon monoxide poisoning, and rabies. One theory dating from 1872 suggests that Poe's death resulted from cooping, a form of electoral fraud in which citizens were forced to vote for a particular candidate, sometimes leading to violence and even murder. 
Immediately after Poe's death, his literary rival Rufus Wilmot Griswold wrote a slanted high-profile obituary under a pseudonym, filled with falsehoods that cast Poe as a lunatic, and which described him as a person who "walked the streets, in madness or melancholy, with lips moving in indistinct curses, or with eyes upturned in passionate prayers, (never for himself, for he felt, or professed to feel, that he was already damned)". The long obituary appeared in the New York Tribune, signed "Ludwig" on the day that Poe was buried in Baltimore. It was further published throughout the country. The obituary began, "Edgar Allan Poe is dead. He died in Baltimore the day before yesterday. This announcement will startle many, but few will be grieved by it." "Ludwig" was soon identified as Griswold, an editor, critic, and anthologist who had borne a grudge against Poe since 1842. Griswold somehow became Poe's literary executor and attempted to destroy his enemy's reputation after his death. Griswold wrote a biographical article of Poe called "Memoir of the Author", which he included in an 1850 volume of the collected works. There he depicted Poe as a depraved, drunken, drug-addled madman and included Poe's letters as evidence. Many of his claims were either lies or distortions; for example, it is seriously disputed that Poe was a drug addict. Griswold's book was denounced by those who knew Poe well, including John Neal, who published an article defending Poe and attacking Griswold as a "Rhadamanthus, who is not to be bilked of his fee, a thimble-full of newspaper notoriety". Griswold's book nevertheless became a popularly accepted biographical source. This was in part because it was the only full biography available and was widely reprinted, and in part because readers thrilled at the thought of reading works by an "evil" man. Letters that Griswold presented as proof were later revealed as forgeries. Poe's best-known fiction works are Gothic horror, adhering to the genre's conventions to appeal to the public taste. His most recurring themes deal with questions of death, including its physical signs, the effects of decomposition, concerns of premature burial, the reanimation of the dead, and mourning. Many of his works are generally considered part of the dark romanticism genre, a literary reaction to transcendentalism which Poe strongly disliked. He referred to followers of the transcendental movement as "Frog-Pondians", after the pond on Boston Common, and ridiculed their writings as "metaphor—run mad," lapsing into "obscurity for obscurity's sake" or "mysticism for mysticism's sake". Poe once wrote in a letter to Thomas Holley Chivers that he did not dislike transcendentalists, "only the pretenders and sophists among them". Beyond horror, Poe also wrote satires, humor tales, and hoaxes. For comic effect, he used irony and ludicrous extravagance, often in an attempt to liberate the reader from cultural conformity. "Metzengerstein" is the first story that Poe is known to have published and his first foray into horror, but it was originally intended as a burlesque satirizing the popular genre. Poe also reinvented science fiction, responding in his writing to emerging technologies such as hot air balloons in "The Balloon-Hoax". Poe wrote much of his work using themes aimed specifically at mass-market tastes. To that end, his fiction often included elements of popular pseudosciences, such as phrenology and physiognomy. 
Poe's writing reflects his literary theories, which he presented in his criticism and also in essays such as "The Poetic Principle". He disliked didacticism and allegory, though he believed that meaning in literature should be an undercurrent just beneath the surface. Works with obvious meanings, he wrote, cease to be art. He believed that work of quality should be brief and focus on a specific single effect. To that end, he believed that the writer should carefully calculate every sentiment and idea. Poe describes his method in writing "The Raven" in the essay "The Philosophy of Composition", and he claims to have strictly followed this method. It has been questioned whether he really followed this system, however. T. S. Eliot said: "It is difficult for us to read that essay without reflecting that if Poe plotted out his poem with such calculation, he might have taken a little more pains over it: the result hardly does credit to the method." Biographer Joseph Wood Krutch described the essay as "a rather highly ingenious exercise in the art of rationalization". During his lifetime, Poe was mostly recognized as a literary critic. Fellow critic James Russell Lowell called him "the most discriminating, philosophical, and fearless critic upon imaginative works who has written in America", suggesting—rhetorically—that he occasionally used prussic acid instead of ink. Poe's caustic reviews earned him the reputation of being a "tomahawk man". A favorite target of Poe's criticism was Boston's acclaimed poet Henry Wadsworth Longfellow, who was often defended by his literary friends in what was later called "The Longfellow War". Poe accused Longfellow of "the heresy of the didactic", writing poetry that was preachy, derivative, and thematically plagiarized. Poe correctly predicted that Longfellow's reputation and style of poetry would decline, concluding, "We grant him high qualities, but deny him the Future". Poe was also known as a writer of fiction and became one of the first American authors of the 19th century to become more popular in Europe than in the United States. Poe is particularly respected in France, in part due to early translations by Charles Baudelaire. Baudelaire's translations became definitive renditions of Poe's work in Continental Europe. Poe's early detective fiction tales featuring C. Auguste Dupin laid the groundwork for future detectives in literature. Sir Arthur Conan Doyle said, "Each [of Poe's detective stories] is a root from which a whole literature has developed.... Where was the detective story until Poe breathed the breath of life into it?" The Mystery Writers of America have named their awards for excellence in the genre the "Edgars". Poe's work also influenced science fiction, notably Jules Verne, who wrote a sequel to Poe's novel The Narrative of Arthur Gordon Pym of Nantucket called An Antarctic Mystery, also known as The Sphinx of the Ice Fields. Science fiction author H. G. Wells noted, "Pym tells what a very intelligent mind could imagine about the south polar region a century ago". In 2013, The Guardian cited Pym as one of the greatest novels ever written in the English language, and noted its influence on later authors such as Doyle, Henry James, B. Traven, and David Morrell. Horror author and historian H. P. Lovecraft was heavily influenced by Poe's horror tales, dedicating an entire section of his long essay, "Supernatural Horror in Literature", to his influence on the genre. In his letters, Lovecraft described Poe as his "God of Fiction". 
Lovecraft's earlier stories express a significant influence from Poe. A later work, At the Mountains of Madness, quotes him and was influenced by The Narrative of Arthur Gordon Pym of Nantucket. Lovecraft also made extensive use of Poe's unity of effect in his fiction. Alfred Hitchcock once said, "It's because I liked Edgar Allan Poe's stories so much that I began to make suspense films". Many references to Poe's works are present in Vladimir Nabokov's novels. Like many famous artists, Poe's works have spawned imitators. One trend among imitators of Poe has been claims by clairvoyants or psychics to be "channeling" poems from Poe's spirit. One of the most notable of these was Lizzie Doten, who published Poems from the Inner Life in 1863, in which she claimed to have "received" new compositions by Poe's spirit. The compositions were re-workings of famous Poe poems such as "The Bells", but which reflected a new, positive outlook. Poe has also received criticism. This is partly because of the negative perception of his personal character and its influence upon his reputation. William Butler Yeats was occasionally critical of Poe and once called him "vulgar". Transcendentalist Ralph Waldo Emerson reacted to "The Raven" by saying, "I see nothing in it", and derisively referred to Poe as "the jingle man". Aldous Huxley wrote that Poe's writing "falls into vulgarity" by being "too poetical"—the equivalent of wearing a diamond ring on every finger. It is believed that only twelve copies have survived of Poe's first book Tamerlane and Other Poems. In December 2009, one copy sold at Christie's auctioneers in New York City for $662,500, a record price paid for a work of American literature. Eureka: A Prose Poem, an essay written in 1848, included a cosmological theory that presaged the Big Bang theory by 80 years, as well as the first plausible solution to Olbers' paradox. Poe eschewed the scientific method in Eureka and instead wrote from pure intuition. For this reason, he considered it a work of art, not science, but insisted that it was still true and considered it to be his career masterpiece. Even so, Eureka is full of scientific errors. In particular, Poe's suggestions ignored Newtonian principles regarding the density and rotation of planets. Poe had a keen interest in cryptography. He had placed a notice of his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger, inviting submissions of ciphers which he proceeded to solve. In July 1841, Poe had published an essay called "A Few Words on Secret Writing" in Graham's Magazine. Capitalizing on public interest in the topic, he wrote "The Gold-Bug" incorporating ciphers as an essential part of the story. Poe's success with cryptography relied not so much on his deep knowledge of that field (his method was limited to the simple substitution cryptogram) as on his knowledge of the magazine and newspaper culture. His keen analytical abilities, which were so evident in his detective stories, allowed him to see that the general public was largely ignorant of the methods by which a simple substitution cryptogram can be solved, and he used this to his advantage. The sensation that Poe created with his cryptography stunts played a major role in popularizing cryptograms in newspapers and magazines. Two ciphers he published in 1841 under the name "W. B. Tyler" were not solved until 1992 and 2000 respectively. One was a quote from Joseph Addison's play Cato; the other is probably based on a poem by Hester Thrale. 
Poe had an influence on cryptography beyond increasing public interest during his lifetime. William Friedman, America's foremost cryptologist, was heavily influenced by Poe. Friedman's initial interest in cryptography came from reading "The Gold-Bug" as a child, an interest that he later put to use in deciphering Japan's PURPLE code during World War II. The historical Edgar Allan Poe has appeared as a fictionalized character, often in order to represent the "mad genius" or "tormented artist" and in order to exploit his personal struggles. Many such depictions also blend in with characters from his stories, suggesting that Poe and his characters share identities. Often, fictional depictions of Poe use his mystery-solving skills in such novels as The Poe Shadow by Matthew Pearl. No childhood home of Poe is still standing, including the Allan family's Moldavia estate. The oldest standing home in Richmond, the Old Stone House, is in use as the Edgar Allan Poe Museum, though Poe never lived there. The collection includes many items that Poe used during his time with the Allan family, and also features several rare first printings of Poe works. 13 West Range is the dorm room that Poe is believed to have used while studying at the University of Virginia in 1826; it is preserved and available for visits. Its upkeep is overseen by a group of students and staff known as the Raven Society. The earliest surviving home in which Poe lived is at 203 North Amity St. in Baltimore, which is preserved as the Edgar Allan Poe House and Museum. Poe is believed to have lived in the home at the age of 23 when he first lived with Maria Clemm and Virginia and possibly his grandmother and possibly his brother William Henry Leonard Poe. It is open to the public and is also the home of the Edgar Allan Poe Society. While in Philadelphia between 1838 and 1844, Poe lived at at least four different residences, including the Indian Queen Hotel at 15 S. 4th Street, a residence at 16th and Locust Streets, another at 2502 Fairmount Street, and then a home in the Spring Garden section of the city at 532 N. 7th Street, a residence that has been preserved by the National Park Service as the Edgar Allan Poe National Historic Site. Poe's final home in the Bronx, New York City, is preserved as the Edgar Allan Poe Cottage. In Boston, a commemorative plaque on Boylston Street is several blocks away from the actual location of Poe's birth. The house which was his birthplace at 62 Carver Street no longer exists; also, the street has since been renamed "Charles Street South". A "square" at the intersection of Broadway, Fayette, and Carver Streets had once been named in his honor, but it disappeared when the streets were rearranged. In 2009, the intersection of Charles and Boylston Streets (two blocks north of his birthplace) was designated "Edgar Allan Poe Square". In March 2014, fundraising was completed for construction of a permanent memorial sculpture, known as Poe Returning to Boston, at this location. The winning design by Stefanie Rocknak depicts a life-sized Poe striding against the wind, accompanied by a flying raven; his suitcase lid has fallen open, leaving a "paper trail" of literary works embedded in the sidewalk behind him. The public unveiling on October 5, 2014, was attended by former U.S. poet laureate Robert Pinsky. Other Poe landmarks include a building on the Upper West Side, where Poe temporarily lived when he first moved to New York City. A plaque suggests that Poe wrote "The Raven" here. 
On Sullivan's Island in Charleston County, South Carolina, the setting of Poe's tale "The Gold-Bug" and where Poe served in the Army in 1827 at Fort Moultrie, there is a restaurant called Poe's Tavern. In the Fell's Point section of Baltimore, a bar still stands where legend says that Poe was last seen drinking before his death. Known as "The Horse You Came in On", local lore insists that a ghost whom they call "Edgar" haunts the rooms above. Early daguerreotypes of Poe continue to arouse great interest among literary historians. Notable among them are: Between 1949 and 2009, a bottle of cognac and three roses were left at Poe's original grave marker every January 19 by an unknown visitor affectionately referred to as the "Poe Toaster". Sam Porpora was a historian at the Westminster Church in Baltimore, where Poe is buried; he claimed on August 15, 2007, that he had started the tradition in 1949. Porpora said that the tradition began in order to raise money and enhance the profile of the church. His story has not been confirmed, and some details which he gave to the press are factually inaccurate. The Poe Toaster's last appearance was on January 19, 2009, the day of Poe's bicentennial. Short stories Poetry Other works
[ { "paragraph_id": 0, "text": "Edgar Allan Poe (né Edgar Poe; January 19, 1809 – October 7, 1849) was an American writer, poet, author, editor, and literary critic who is best known for his poetry and short stories, particularly his tales of mystery and the macabre. He is widely regarded as a central figure of Romanticism and Gothic fiction in the United States, and of American literature. Poe was one of the country's earliest practitioners of the short story, and is considered the inventor of the detective fiction genre, as well as a significant contributor to the emerging genre of science fiction. He is the first well-known American writer to earn a living through writing alone, resulting in a financially difficult life and career.", "title": "" }, { "paragraph_id": 1, "text": "Poe was born in Boston, the second child of actors David and Elizabeth \"Eliza\" Poe. His father abandoned the family in 1810, and when his mother died the following year, Poe was taken in by John and Frances Allan of Richmond, Virginia. They never formally adopted him, but he was with them well into young adulthood. He attended the University of Virginia but left after a year due to lack of money. He quarreled with John Allan over the funds for his education, and his gambling debts. In 1827, having enlisted in the United States Army under an assumed name, he published his first collection, Tamerlane and Other Poems, credited only to \"a Bostonian\". Poe and Allan reached a temporary rapprochement after the death of Allan's wife in 1829. Poe later failed as an officer cadet at West Point, declared a firm wish to be a poet and writer, and parted ways with Allan.", "title": "" }, { "paragraph_id": 2, "text": "Poe switched his focus to prose, and spent the next several years working for literary journals and periodicals, becoming known for his own style of literary criticism. His work forced him to move between several cities, including Baltimore, Philadelphia, and New York City. In 1836, he married his 13-year-old cousin, Virginia Clemm, but she died of tuberculosis in 1847. In January 1845, he published his poem \"The Raven\" to instant success. He planned for years to produce his own journal The Penn, later renamed The Stylus. But before it began publishing, Poe died in Baltimore in 1849, aged 40, under mysterious circumstances. The cause of his death remains unknown, and has been variously attributed to many causes including disease, alcoholism, substance abuse, and suicide.", "title": "" }, { "paragraph_id": 3, "text": "Poe and his works influenced literature around the world, as well as specialized fields such as cosmology and cryptography. He and his work appear throughout popular culture in literature, music, films, and television. A number of his homes are dedicated museums. The Mystery Writers of America present an annual Edgar Award for distinguished work in the mystery genre.", "title": "" }, { "paragraph_id": 4, "text": "Edgar Poe was born in Boston, Massachusetts, on January 19, 1809, the second child of American actor David Poe Jr. and English-born actress Elizabeth Arnold Hopkins Poe. He had an elder brother, Henry, and a younger sister, Rosalie. Their grandfather, David Poe, had emigrated from County Cavan, Ireland, around 1750.", "title": "Early life and education" }, { "paragraph_id": 5, "text": "His father abandoned the family in 1810, and his mother died a year later from pulmonary tuberculosis. 
Poe was then taken into the home of John Allan, a successful merchant in Richmond, Virginia, who dealt in a variety of goods, including cloth, wheat, tombstones, tobacco, and slaves. The Allans served as a foster family and gave him the name \"Edgar Allan Poe\", although they never formally adopted him.", "title": "Early life and education" }, { "paragraph_id": 6, "text": "The Allan family had Poe baptized into the Episcopal Church in 1812. John Allan alternately spoiled and aggressively disciplined his foster son. The family sailed to the United Kingdom in 1815, and Poe briefly attended the grammar school in Irvine, Ayrshire, Scotland, where Allan was born, before rejoining the family in London in 1816. There he studied at a boarding school in Chelsea until summer 1817. He was subsequently entered at the Reverend John Bransby's Manor House School at Stoke Newington, then a suburb 4 miles (6 km) north of London.", "title": "Early life and education" }, { "paragraph_id": 7, "text": "Poe moved with the Allans back to Richmond in 1820. In 1824, he served as the lieutenant of the Richmond youth honor guard as the city celebrated the visit of the Marquis de Lafayette. In March 1825, Allan's uncle and business benefactor William Galt, said to be one of the wealthiest men in Richmond, died, leaving Allan several acres of real estate. The inheritance was estimated at $750,000 (equivalent to $19,000,000 in 2022). By summer 1825, Allan celebrated his expansive wealth by purchasing a two-story brick house called Moldavia.", "title": "Early life and education" }, { "paragraph_id": 8, "text": "Poe may have become engaged to Sarah Elmira Royster before he registered at the University of Virginia in February 1826 to study ancient and modern languages. The university was in its infancy, established on the ideals of its founder Thomas Jefferson. It had strict rules against gambling, horses, guns, tobacco, and alcohol, but these rules were mostly ignored. Jefferson enacted a system of student self-government, allowing students to choose their own studies, make their own arrangements for boarding, and report all wrongdoing to the faculty. The unique system was still in chaos, and there was a high dropout rate. During his time there, Poe lost touch with Royster and also became estranged from his foster father over gambling debts. He claimed that Allan had not given him sufficient money to register for classes, purchase texts, and procure and furnish a dormitory. Allan did send additional money and clothes, but Poe's debts increased. Poe gave up on the university after a year but did not feel welcome returning to Richmond, especially when he learned that his sweetheart Royster had married another man, Alexander Shelton. He traveled to Boston in April 1827, sustaining himself with odd jobs as a clerk and newspaper writer, and started using the pseudonym Henri Le Rennet during this period.", "title": "Early life and education" }, { "paragraph_id": 9, "text": "Poe was unable to support himself, so he enlisted in the United States Army as a private on May 27, 1827, using the name \"Edgar A. Perry\". He claimed that he was 22 years old even though he was 18. He first served at Fort Independence in Boston Harbor for five dollars a month. That year, he released his first book, a 40-page collection of poetry titled Tamerlane and Other Poems, with the byline \"by a Bostonian\". Only 50 copies were printed, and the book received virtually no attention. 
Poe's regiment was posted to Fort Moultrie in Charleston, South Carolina, and traveled there on the brig Waltham on November 8, 1827. Poe was promoted to \"artificer\", an enlisted tradesman who prepared shells for artillery, and had his monthly pay doubled. He served for two years and attained the rank of Sergeant Major for Artillery, the highest rank that a non-commissioned officer could achieve; he then sought to end his five-year enlistment early. He revealed his real name and his circumstances to his commanding officer, Lieutenant Howard, who would allow Poe to be discharged only if he reconciled with Allan. Poe wrote a letter to Allan, who was unsympathetic and spent several months ignoring Poe's pleas; Allan may not have written to Poe even to make him aware of his foster mother's illness. Frances Allan died on February 28, 1829, and Poe visited the day after her burial. Perhaps softened by his wife's death, Allan agreed to support Poe's attempt to be discharged in order to receive an appointment to the United States Military Academy at West Point, New York.", "title": "Military career" }, { "paragraph_id": 10, "text": "Poe was finally discharged on April 15, 1829, after securing a replacement to finish his enlisted term for him. Before entering West Point, he moved to Baltimore for a time to stay with his widowed aunt Maria Clemm, her daughter Virginia Eliza Clemm (Poe's first cousin), his brother Henry, and his invalid grandmother Elizabeth Cairnes Poe. In September of that year, Poe received \"the very first words of encouragement I ever remember to have heard\" in a review of his poetry by influential critic John Neal, prompting Poe to dedicate one of the poems to Neal in his second book Al Aaraaf, Tamerlane and Minor Poems, published in Baltimore in 1829.", "title": "Military career" }, { "paragraph_id": 11, "text": "Poe traveled to West Point and matriculated as a cadet on July 1, 1830. In October 1830, Allan married his second wife Louisa Patterson. The marriage and bitter quarrels with Poe over the children born to Allan out of extramarital affairs led to the foster father finally disowning Poe. Poe decided to leave West Point by purposely getting court-martialed. On February 8, 1831, he was tried for gross neglect of duty and disobedience of orders for refusing to attend formations, classes, or church. He tactically pleaded not guilty to induce dismissal, knowing that he would be found guilty.", "title": "Military career" }, { "paragraph_id": 12, "text": "Poe left for New York in February 1831 and released a third volume of poems, simply titled Poems. The book was financed with help from his fellow cadets at West Point, many of whom donated 75 cents to the cause, raising a total of $170. They may have been expecting verses similar to the satirical ones Poe had written about commanding officers. It was printed by Elam Bliss of New York, labeled as \"Second Edition\", and included a page saying, \"To the U.S. Corps of Cadets this volume is respectfully dedicated\". The book once again reprinted the long poems \"Tamerlane\" and \"Al Aaraaf\" but also six previously unpublished poems, including early versions of \"To Helen\", \"Israfel\", and \"The City in the Sea\". Poe returned to Baltimore to his aunt, brother, and cousin in March 1831. 
His elder brother Henry had been in ill health, in part due to problems with alcoholism, and he died on August 1, 1831.", "title": "Military career" }, { "paragraph_id": 13, "text": "After his brother's death, Poe began more earnest attempts to start his career as a writer, but he chose a difficult time in American publishing to do so. He was one of the first Americans to live by writing alone and was hampered by the lack of an international copyright law. American publishers often produced unauthorized copies of British works rather than paying for new work by Americans. The industry was also particularly hurt by the Panic of 1837. There was a booming growth in American periodicals around this time, fueled in part by new technology, but many did not last beyond a few issues. Publishers often refused to pay their writers or paid them much later than they promised, and Poe repeatedly resorted to humiliating pleas for money and other assistance.", "title": "Publishing career" }, { "paragraph_id": 14, "text": "After his early attempts at poetry, Poe had turned his attention to prose, likely based on John Neal's critiques in The Yankee magazine. He placed a few stories with a Philadelphia publication and began work on his only drama Politian. The Baltimore Saturday Visiter awarded him a prize in October 1833 for his short story \"MS. Found in a Bottle\". The story brought him to the attention of John P. Kennedy, a Baltimorean of considerable means who helped Poe place some of his stories and introduced him to Thomas W. White, editor of the Southern Literary Messenger in Richmond.", "title": "Publishing career" }, { "paragraph_id": 15, "text": "In 1835, Poe became assistant editor of the Southern Literary Messenger, but White discharged him within a few weeks for being drunk on the job. Poe returned to Baltimore, where he obtained a license to marry his cousin Virginia on September 22, 1835, though it is unknown if they were married at that time. He was 26 and she was 13.", "title": "Publishing career" }, { "paragraph_id": 16, "text": "Poe was reinstated by White after promising good behavior, and he returned to Richmond with Virginia and her mother. He remained at the Messenger until January 1837. During this period, Poe claimed that its circulation increased from 700 to 3,500. He published several poems, book reviews, critiques, and stories in the paper. On May 16, 1836, he and Virginia held a Presbyterian wedding ceremony performed by Amasa Converse at their Richmond boarding house, with a witness falsely attesting Clemm's age as 21.", "title": "Publishing career" }, { "paragraph_id": 17, "text": "In 1838, Poe relocated to Philadelphia, where he lived at four different residences between 1838 and 1844, one of which at 532 N. 7th Street has been preserved as a National Historic Landmark.", "title": "Publishing career" }, { "paragraph_id": 18, "text": "That same year, Poe's novel The Narrative of Arthur Gordon Pym of Nantucket was published and widely reviewed. In the summer of 1839, he became assistant editor of Burton's Gentleman's Magazine. He published numerous articles, stories, and reviews, enhancing the reputation as a trenchant critic that he had established at the Messenger. 
Also in 1839, the collection Tales of the Grotesque and Arabesque was published in two volumes, though he made little money from it and it received mixed reviews.", "title": "Publishing career" }, { "paragraph_id": 19, "text": "In June 1840, Poe published a prospectus announcing his intentions to start his own journal called The Stylus, although he originally intended to call it The Penn, since it would have been based in Philadelphia. He bought advertising space for his prospectus in the June 6, 1840, issue of Philadelphia's Saturday Evening Post: \"Prospectus of the Penn Magazine, a Monthly Literary journal to be edited and published in the city of Philadelphia by Edgar A. Poe.\" The journal was never produced before Poe's death.", "title": "Publishing career" }, { "paragraph_id": 20, "text": "Poe left Burton's after about a year and found a position as writer and co-editor at Graham's Magazine, a successful monthly publication. In the last number of Graham's for 1841, Poe was among the co-signatories to an editorial note of celebration of the tremendous success the magazine had achieved in the past year: \"Perhaps the editors of no magazine, either in America or in Europe, ever sat down, at the close of a year, to contemplate the progress of their work with more satisfaction than we do now. Our success has been unexampled, almost incredible. We may assert without fear of contradiction that no periodical ever witnessed the same increase during so short a period.\"", "title": "Publishing career" }, { "paragraph_id": 21, "text": "Around this time, Poe attempted to secure a position in the administration of John Tyler, claiming that he was a member of the Whig Party. He hoped to be appointed to the United States Custom House in Philadelphia with help from President Tyler's son Robert, an acquaintance of Poe's friend Frederick Thomas. Poe failed to show up for a meeting with Thomas to discuss the appointment in mid-September 1842, claiming to have been sick, though Thomas believed that he had been drunk. Poe was promised an appointment, but all positions were filled by others.", "title": "Publishing career" }, { "paragraph_id": 22, "text": "One evening in January 1842, Virginia showed the first signs of consumption, or tuberculosis, while singing and playing the piano, which Poe described as breaking a blood vessel in her throat. She only partially recovered, and Poe began to drink more heavily under the stress of her illness. He left Graham's and attempted to find a new position, for a time angling for a government post. He returned to New York where he worked briefly at the Evening Mirror before becoming editor of the Broadway Journal, and later its owner. There Poe alienated himself from other writers by publicly accusing Henry Wadsworth Longfellow of plagiarism, though Longfellow never responded. On January 29, 1845, Poe's poem \"The Raven\" appeared in the Evening Mirror and became a popular sensation. It made Poe a household name almost instantly, though he was paid only $9 for its publication. It was concurrently published in The American Review: A Whig Journal under the pseudonym \"Quarles\".", "title": "Publishing career" }, { "paragraph_id": 23, "text": "The Broadway Journal failed in 1846, and Poe moved to a cottage in Fordham, New York, in the Bronx. That home, now known as the Edgar Allan Poe Cottage, was relocated in later years to a park near the southeast corner of the Grand Concourse and Kingsbridge Road. Nearby, Poe befriended the Jesuits at St. 
John's College, now Fordham University. Virginia died at the cottage on January 30, 1847. Biographers and critics often suggest that Poe's frequent theme of the \"death of a beautiful woman\" stems from the repeated loss of women throughout his life, including his wife.", "title": "Publishing career" }, { "paragraph_id": 24, "text": "Poe was increasingly unstable after his wife's death. He attempted to court poet Sarah Helen Whitman, who lived in Providence, Rhode Island. Their engagement failed, purportedly because of Poe's drinking and erratic behavior. There is also strong evidence that Whitman's mother intervened and did much to derail the relationship. Poe then returned to Richmond and resumed a relationship with his childhood sweetheart Sarah Elmira Royster.", "title": "Publishing career" }, { "paragraph_id": 25, "text": "On October 3, 1849, Poe was found semiconscious in Baltimore, \"in great distress, and... in need of immediate assistance\", according to Joseph W. Walker, who found him. He was taken to the Washington Medical College, where he died on Sunday, October 7, 1849, at 5:00 in the morning. Poe was not coherent long enough to explain how he came to be in his dire condition and why he was wearing clothes that were not his own. He is said to have repeatedly called out the name \"Reynolds\" on the night before his death, though it is unclear to whom he was referring. His attending physician said that Poe's final words were, \"Lord help my poor soul\". All of the relevant medical records have been lost, including Poe's death certificate.", "title": "Death" }, { "paragraph_id": 26, "text": "Newspapers at the time reported Poe's death as \"congestion of the brain\" or \"cerebral inflammation\", common euphemisms for death from disreputable causes such as alcoholism. The actual cause of death remains a mystery. Speculation has included delirium tremens, heart disease, epilepsy, syphilis, meningeal inflammation, cholera, carbon monoxide poisoning, and rabies. One theory dating from 1872 suggests that Poe's death resulted from cooping, a form of electoral fraud in which citizens were forced to vote for a particular candidate, sometimes leading to violence and even murder.", "title": "Death" }, { "paragraph_id": 27, "text": "Immediately after Poe's death, his literary rival Rufus Wilmot Griswold wrote a slanted high-profile obituary under a pseudonym, filled with falsehoods that cast Poe as a lunatic, and which described him as a person who \"walked the streets, in madness or melancholy, with lips moving in indistinct curses, or with eyes upturned in passionate prayers, (never for himself, for he felt, or professed to feel, that he was already damned)\".", "title": "Death" }, { "paragraph_id": 28, "text": "The long obituary appeared in the New York Tribune, signed \"Ludwig\" on the day that Poe was buried in Baltimore. It was further published throughout the country. The obituary began, \"Edgar Allan Poe is dead. He died in Baltimore the day before yesterday. This announcement will startle many, but few will be grieved by it.\" \"Ludwig\" was soon identified as Griswold, an editor, critic, and anthologist who had borne a grudge against Poe since 1842. Griswold somehow became Poe's literary executor and attempted to destroy his enemy's reputation after his death.", "title": "Death" }, { "paragraph_id": 29, "text": "Griswold wrote a biographical article of Poe called \"Memoir of the Author\", which he included in an 1850 volume of the collected works. 
There he depicted Poe as a depraved, drunken, drug-addled madman and included Poe's letters as evidence. Many of his claims were either lies or distortions; for example, it is seriously disputed that Poe was a drug addict. Griswold's book was denounced by those who knew Poe well, including John Neal, who published an article defending Poe and attacking Griswold as a \"Rhadamanthus, who is not to be bilked of his fee, a thimble-full of newspaper notoriety\". Griswold's book nevertheless became a popularly accepted biographical source. This was in part because it was the only full biography available and was widely reprinted, and in part because readers thrilled at the thought of reading works by an \"evil\" man. Letters that Griswold presented as proof were later revealed as forgeries.", "title": "Death" }, { "paragraph_id": 30, "text": "Poe's best-known fiction works are Gothic horror, adhering to the genre's conventions to appeal to the public taste. His most recurring themes deal with questions of death, including its physical signs, the effects of decomposition, concerns of premature burial, the reanimation of the dead, and mourning. Many of his works are generally considered part of the dark romanticism genre, a literary reaction to transcendentalism which Poe strongly disliked. He referred to followers of the transcendental movement as \"Frog-Pondians\", after the pond on Boston Common, and ridiculed their writings as \"metaphor—run mad,\" lapsing into \"obscurity for obscurity's sake\" or \"mysticism for mysticism's sake\". Poe once wrote in a letter to Thomas Holley Chivers that he did not dislike transcendentalists, \"only the pretenders and sophists among them\".", "title": "Literary style and themes" }, { "paragraph_id": 31, "text": "Beyond horror, Poe also wrote satires, humor tales, and hoaxes. For comic effect, he used irony and ludicrous extravagance, often in an attempt to liberate the reader from cultural conformity. \"Metzengerstein\" is the first story that Poe is known to have published and his first foray into horror, but it was originally intended as a burlesque satirizing the popular genre. Poe also reinvented science fiction, responding in his writing to emerging technologies such as hot air balloons in \"The Balloon-Hoax\".", "title": "Literary style and themes" }, { "paragraph_id": 32, "text": "Poe wrote much of his work using themes aimed specifically at mass-market tastes. To that end, his fiction often included elements of popular pseudosciences, such as phrenology and physiognomy.", "title": "Literary style and themes" }, { "paragraph_id": 33, "text": "Poe's writing reflects his literary theories, which he presented in his criticism and also in essays such as \"The Poetic Principle\". He disliked didacticism and allegory, though he believed that meaning in literature should be an undercurrent just beneath the surface. Works with obvious meanings, he wrote, cease to be art. He believed that work of quality should be brief and focus on a specific single effect. To that end, he believed that the writer should carefully calculate every sentiment and idea.", "title": "Literary style and themes" }, { "paragraph_id": 34, "text": "Poe describes his method in writing \"The Raven\" in the essay \"The Philosophy of Composition\", and he claims to have strictly followed this method. It has been questioned whether he really followed this system, however. T. S. 
Eliot said: \"It is difficult for us to read that essay without reflecting that if Poe plotted out his poem with such calculation, he might have taken a little more pains over it: the result hardly does credit to the method.\" Biographer Joseph Wood Krutch described the essay as \"a rather highly ingenious exercise in the art of rationalization\".", "title": "Literary style and themes" }, { "paragraph_id": 35, "text": "During his lifetime, Poe was mostly recognized as a literary critic. Fellow critic James Russell Lowell called him \"the most discriminating, philosophical, and fearless critic upon imaginative works who has written in America\", suggesting—rhetorically—that he occasionally used prussic acid instead of ink. Poe's caustic reviews earned him the reputation of being a \"tomahawk man\". A favorite target of Poe's criticism was Boston's acclaimed poet Henry Wadsworth Longfellow, who was often defended by his literary friends in what was later called \"The Longfellow War\". Poe accused Longfellow of \"the heresy of the didactic\", writing poetry that was preachy, derivative, and thematically plagiarized. Poe correctly predicted that Longfellow's reputation and style of poetry would decline, concluding, \"We grant him high qualities, but deny him the Future\".", "title": "Legacy" }, { "paragraph_id": 36, "text": "Poe was also known as a writer of fiction and became one of the first American authors of the 19th century to become more popular in Europe than in the United States. Poe is particularly respected in France, in part due to early translations by Charles Baudelaire. Baudelaire's translations became definitive renditions of Poe's work in Continental Europe.", "title": "Legacy" }, { "paragraph_id": 37, "text": "Poe's early detective fiction tales featuring C. Auguste Dupin laid the groundwork for future detectives in literature. Sir Arthur Conan Doyle said, \"Each [of Poe's detective stories] is a root from which a whole literature has developed.... Where was the detective story until Poe breathed the breath of life into it?\" The Mystery Writers of America have named their awards for excellence in the genre the \"Edgars\". Poe's work also influenced science fiction, notably Jules Verne, who wrote a sequel to Poe's novel The Narrative of Arthur Gordon Pym of Nantucket called An Antarctic Mystery, also known as The Sphinx of the Ice Fields. Science fiction author H. G. Wells noted, \"Pym tells what a very intelligent mind could imagine about the south polar region a century ago\". In 2013, The Guardian cited Pym as one of the greatest novels ever written in the English language, and noted its influence on later authors such as Doyle, Henry James, B. Traven, and David Morrell.", "title": "Legacy" }, { "paragraph_id": 38, "text": "Horror author and historian H. P. Lovecraft was heavily influenced by Poe's horror tales, dedicating an entire section of his long essay, \"Supernatural Horror in Literature\", to his influence on the genre. In his letters, Lovecraft described Poe as his \"God of Fiction\". Lovecraft's earlier stories express a significant influence from Poe. A later work, At the Mountains of Madness, quotes him and was influenced by The Narrative of Arthur Gordon Pym of Nantucket. Lovecraft also made extensive use of Poe's unity of effect in his fiction. Alfred Hitchcock once said, \"It's because I liked Edgar Allan Poe's stories so much that I began to make suspense films\". 
Many references to Poe's works are present in Vladimir Nabokov's novels.", "title": "Legacy" }, { "paragraph_id": 39, "text": "Like many famous artists, Poe's works have spawned imitators. One trend among imitators of Poe has been claims by clairvoyants or psychics to be \"channeling\" poems from Poe's spirit. One of the most notable of these was Lizzie Doten, who published Poems from the Inner Life in 1863, in which she claimed to have \"received\" new compositions by Poe's spirit. The compositions were re-workings of famous Poe poems such as \"The Bells\", but which reflected a new, positive outlook.", "title": "Legacy" }, { "paragraph_id": 40, "text": "Poe has also received criticism. This is partly because of the negative perception of his personal character and its influence upon his reputation. William Butler Yeats was occasionally critical of Poe and once called him \"vulgar\". Transcendentalist Ralph Waldo Emerson reacted to \"The Raven\" by saying, \"I see nothing in it\", and derisively referred to Poe as \"the jingle man\". Aldous Huxley wrote that Poe's writing \"falls into vulgarity\" by being \"too poetical\"—the equivalent of wearing a diamond ring on every finger.", "title": "Legacy" }, { "paragraph_id": 41, "text": "It is believed that only twelve copies have survived of Poe's first book Tamerlane and Other Poems. In December 2009, one copy sold at Christie's auctioneers in New York City for $662,500, a record price paid for a work of American literature.", "title": "Legacy" }, { "paragraph_id": 42, "text": "Eureka: A Prose Poem, an essay written in 1848, included a cosmological theory that presaged the Big Bang theory by 80 years, as well as the first plausible solution to Olbers' paradox. Poe eschewed the scientific method in Eureka and instead wrote from pure intuition. For this reason, he considered it a work of art, not science, but insisted that it was still true and considered it to be his career masterpiece. Even so, Eureka is full of scientific errors. In particular, Poe's suggestions ignored Newtonian principles regarding the density and rotation of planets.", "title": "Legacy" }, { "paragraph_id": 43, "text": "Poe had a keen interest in cryptography. He had placed a notice of his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger, inviting submissions of ciphers which he proceeded to solve. In July 1841, Poe had published an essay called \"A Few Words on Secret Writing\" in Graham's Magazine. Capitalizing on public interest in the topic, he wrote \"The Gold-Bug\" incorporating ciphers as an essential part of the story. Poe's success with cryptography relied not so much on his deep knowledge of that field (his method was limited to the simple substitution cryptogram) as on his knowledge of the magazine and newspaper culture. His keen analytical abilities, which were so evident in his detective stories, allowed him to see that the general public was largely ignorant of the methods by which a simple substitution cryptogram can be solved, and he used this to his advantage. The sensation that Poe created with his cryptography stunts played a major role in popularizing cryptograms in newspapers and magazines.", "title": "Legacy" }, { "paragraph_id": 44, "text": "Two ciphers he published in 1841 under the name \"W. B. Tyler\" were not solved until 1992 and 2000 respectively. 
One was a quote from Joseph Addison's play Cato; the other is probably based on a poem by Hester Thrale.", "title": "Legacy" }, { "paragraph_id": 45, "text": "Poe had an influence on cryptography beyond increasing public interest during his lifetime. William Friedman, America's foremost cryptologist, was heavily influenced by Poe. Friedman's initial interest in cryptography came from reading \"The Gold-Bug\" as a child, an interest that he later put to use in deciphering Japan's PURPLE code during World War II.", "title": "Legacy" }, { "paragraph_id": 46, "text": "The historical Edgar Allan Poe has appeared as a fictionalized character, often in order to represent the \"mad genius\" or \"tormented artist\" and in order to exploit his personal struggles. Many such depictions also blend in with characters from his stories, suggesting that Poe and his characters share identities. Often, fictional depictions of Poe use his mystery-solving skills in such novels as The Poe Shadow by Matthew Pearl.", "title": "In popular culture" }, { "paragraph_id": 47, "text": "", "title": "In popular culture" }, { "paragraph_id": 48, "text": "No childhood home of Poe is still standing, including the Allan family's Moldavia estate. The oldest standing home in Richmond, the Old Stone House, is in use as the Edgar Allan Poe Museum, though Poe never lived there. The collection includes many items that Poe used during his time with the Allan family, and also features several rare first printings of Poe works. 13 West Range is the dorm room that Poe is believed to have used while studying at the University of Virginia in 1826; it is preserved and available for visits. Its upkeep is overseen by a group of students and staff known as the Raven Society.", "title": "In popular culture" }, { "paragraph_id": 49, "text": "The earliest surviving home in which Poe lived is at 203 North Amity St. in Baltimore, which is preserved as the Edgar Allan Poe House and Museum. Poe is believed to have lived in the home at the age of 23 when he first lived with Maria Clemm and Virginia and possibly his grandmother and possibly his brother William Henry Leonard Poe. It is open to the public and is also the home of the Edgar Allan Poe Society.", "title": "In popular culture" }, { "paragraph_id": 50, "text": "While in Philadelphia between 1838 and 1844, Poe lived at at least four different residences, including the Indian Queen Hotel at 15 S. 4th Street, a residence at 16th and Locust Streets, another at 2502 Fairmount Street, and then a home in the Spring Garden section of the city at 532 N. 7th Street, a residence that has been preserved by the National Park Service as the Edgar Allan Poe National Historic Site. Poe's final home in the Bronx, New York City, is preserved as the Edgar Allan Poe Cottage.", "title": "In popular culture" }, { "paragraph_id": 51, "text": "In Boston, a commemorative plaque on Boylston Street is several blocks away from the actual location of Poe's birth. The house which was his birthplace at 62 Carver Street no longer exists; also, the street has since been renamed \"Charles Street South\". A \"square\" at the intersection of Broadway, Fayette, and Carver Streets had once been named in his honor, but it disappeared when the streets were rearranged. 
In 2009, the intersection of Charles and Boylston Streets (two blocks north of his birthplace) was designated \"Edgar Allan Poe Square\".", "title": "In popular culture" }, { "paragraph_id": 52, "text": "In March 2014, fundraising was completed for construction of a permanent memorial sculpture, known as Poe Returning to Boston, at this location. The winning design by Stefanie Rocknak depicts a life-sized Poe striding against the wind, accompanied by a flying raven; his suitcase lid has fallen open, leaving a \"paper trail\" of literary works embedded in the sidewalk behind him. The public unveiling on October 5, 2014, was attended by former U.S. poet laureate Robert Pinsky.", "title": "In popular culture" }, { "paragraph_id": 53, "text": "Other Poe landmarks include a building on the Upper West Side, where Poe temporarily lived when he first moved to New York City. A plaque suggests that Poe wrote \"The Raven\" here. On Sullivan's Island in Charleston County, South Carolina, the setting of Poe's tale \"The Gold-Bug\" and where Poe served in the Army in 1827 at Fort Moultrie, there is a restaurant called Poe's Tavern. In the Fell's Point section of Baltimore, a bar still stands where legend says that Poe was last seen drinking before his death. Known as \"The Horse You Came in On\", local lore insists that a ghost whom they call \"Edgar\" haunts the rooms above.", "title": "In popular culture" }, { "paragraph_id": 54, "text": "Early daguerreotypes of Poe continue to arouse great interest among literary historians. Notable among them are:", "title": "In popular culture" }, { "paragraph_id": 55, "text": "Between 1949 and 2009, a bottle of cognac and three roses were left at Poe's original grave marker every January 19 by an unknown visitor affectionately referred to as the \"Poe Toaster\". Sam Porpora was a historian at the Westminster Church in Baltimore, where Poe is buried; he claimed on August 15, 2007, that he had started the tradition in 1949. Porpora said that the tradition began in order to raise money and enhance the profile of the church. His story has not been confirmed, and some details which he gave to the press are factually inaccurate. The Poe Toaster's last appearance was on January 19, 2009, the day of Poe's bicentennial.", "title": "In popular culture" }, { "paragraph_id": 56, "text": "Short stories", "title": "List of selected works" }, { "paragraph_id": 57, "text": "Poetry", "title": "List of selected works" }, { "paragraph_id": 58, "text": "Other works", "title": "List of selected works" } ]
Edgar Allan Poe was an American writer, poet, author, editor, and literary critic who is best known for his poetry and short stories, particularly his tales of mystery and the macabre. He is widely regarded as a central figure of Romanticism and Gothic fiction in the United States, and of American literature. Poe was one of the country's earliest practitioners of the short story, and is considered the inventor of the detective fiction genre, as well as a significant contributor to the emerging genre of science fiction. He is the first well-known American writer to earn a living through writing alone, resulting in a financially difficult life and career. Poe was born in Boston, the second child of actors David and Elizabeth "Eliza" Poe. His father abandoned the family in 1810, and when his mother died the following year, Poe was taken in by John and Frances Allan of Richmond, Virginia. They never formally adopted him, but he was with them well into young adulthood. He attended the University of Virginia but left after a year due to lack of money. He quarreled with John Allan over the funds for his education, and his gambling debts. In 1827, having enlisted in the United States Army under an assumed name, he published his first collection, Tamerlane and Other Poems, credited only to "a Bostonian". Poe and Allan reached a temporary rapprochement after the death of Allan's wife in 1829. Poe later failed as an officer cadet at West Point, declared a firm wish to be a poet and writer, and parted ways with Allan. Poe switched his focus to prose, and spent the next several years working for literary journals and periodicals, becoming known for his own style of literary criticism. His work forced him to move between several cities, including Baltimore, Philadelphia, and New York City. In 1836, he married his 13-year-old cousin, Virginia Clemm, but she died of tuberculosis in 1847. In January 1845, he published his poem "The Raven" to instant success. He planned for years to produce his own journal The Penn, later renamed The Stylus. But before it began publishing, Poe died in Baltimore in 1849, aged 40, under mysterious circumstances. The cause of his death remains unknown, and has been variously attributed to many causes including disease, alcoholism, substance abuse, and suicide. Poe and his works influenced literature around the world, as well as specialized fields such as cosmology and cryptography. He and his work appear throughout popular culture in literature, music, films, and television. A number of his homes are dedicated museums. The Mystery Writers of America present an annual Edgar Award for distinguished work in the mystery genre.
2001-09-26T23:43:21Z
2023-12-28T23:50:18Z
[ "Template:StandardEbooks", "Template:OL author", "Template:Main", "Template:Cite web", "Template:Né", "Template:Sfn", "Template:Cite book", "Template:ISBN", "Template:Library resources box", "Template:Gutenberg author", "Template:Redirect2", "Template:Use mdy dates", "Template:Use American English", "Template:Sfnm", "Template:Anchor", "Template:Harvnb", "Template:Sister project links", "Template:ISFDB name", "Template:Short description", "Template:Pp-semi-indef", "Template:Webarchive", "Template:Spoken Wikipedia", "Template:Nowrap", "Template:Div col", "Template:Infobox writer", "Template:Convert", "Template:Cite magazine", "Template:Refend", "Template:Pp-move", "Template:Featured article", "Template:Cite journal", "Template:Cite news", "Template:Citation", "Template:Edgar Allan Poe", "Template:Navboxes", "Template:PoeTopics", "Template:Inflation", "Template:Refbegin", "Template:LCAuth", "Template:Div col end", "Template:Portal", "Template:Internet Archive author", "Template:Librivox author", "Template:Authority control", "Template:Inflation/fn", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Edgar_Allan_Poe
9,550
Electricity
Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others. The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts. Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force for the Second Industrial Revolution, with electricity's versatility driving transformations in industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society. Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature. 
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges. In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862. While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life. In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". 
The photoelectric effect is also employed in photocells such as can be found in solar panels. The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor. Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948. The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract. The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together. Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.
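The two-electron comparison above follows directly from Coulomb's law and Newton's law of gravitation: both forces fall off as the inverse square of the separation, so the distance cancels out of their ratio. A minimal Python sketch, assuming rounded textbook values for the physical constants:

    # Ratio of electrostatic repulsion to gravitational attraction for two electrons.
    # Both forces scale as 1/r**2, so the separation r cancels in the ratio.
    k_e = 8.9875e9      # Coulomb constant, N*m^2/C^2
    G = 6.6743e-11      # gravitational constant, N*m^2/kg^2
    q_e = 1.6022e-19    # elementary charge, C
    m_e = 9.1094e-31    # electron mass, kg
    ratio = (k_e * q_e**2) / (G * m_e**2)
    print(f"electric / gravitational force ratio = {ratio:.1e}")   # roughly 4e42

The result, a few times 10⁴², is why gravity can be neglected between individual charged particles even though it dominates between large, electrically neutral bodies.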
The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×10⁻¹⁹ coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10⁻¹⁹ coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle. Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer. The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator. By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation. The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires. Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetism.
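The statement that charge carriers drift only fractions of a millimetre per second can be checked with the relation v = I / (n·q·A), where n is the density of carriers, q the charge on each and A the cross-section of the conductor. A rough Python sketch, assuming a one-ampere current in a copper wire of one square millimetre and a commonly quoted free-electron density for copper of about 8.5×10²⁸ per cubic metre:

    # Estimate of electron drift velocity in a wire: v = I / (n * q * A).
    current = 1.0         # A (assumed)
    area = 1e-6           # m^2, i.e. a 1 mm^2 conductor (assumed)
    n = 8.5e28            # free electrons per m^3, approximate figure for copper
    q = 1.6022e-19        # charge per electron, C
    v = current / (n * q * area)
    print(f"drift velocity = {v * 1000:.3f} mm/s")   # on the order of 0.1 mm/s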
The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment. In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised. The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker. An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field. The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves. 
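The inverse-square behaviour of the field around an isolated point charge can be made concrete with E = k·q/r²: each doubling of the distance cuts the field strength to a quarter. A small Python sketch using an arbitrary one-nanocoulomb source charge:

    # Field magnitude of a point charge at several distances: E = k * q / r**2.
    k = 8.9875e9     # Coulomb constant, N*m^2/C^2
    q = 1e-9         # source charge, C (1 nC, an illustration value)
    for r in (0.1, 0.2, 0.4):          # distances in metres
        E = k * q / r**2               # field strength in volts per metre
        print(f"r = {r:.1f} m  ->  E = {E:6.1f} V/m")   # 898.8, 224.7, 56.2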
A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body. This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects. The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh. The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect. The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, which is the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage. For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable. Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field.
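The definition of the volt given above, one joule of work per coulomb of charge, is equivalent to the relation W = Q·V for the energy needed to move a charge through a potential difference. A short illustrative calculation in Python (the figures are arbitrary examples):

    # Energy needed to move a charge through a potential difference: W = Q * V.
    def work_joules(charge_coulombs, volts):
        """Work in joules to move the charge through the potential difference."""
        return charge_coulombs * volts

    print(work_joules(1.0, 1.0))      # 1 C through 1 V -> 1.0 J, the definition of the volt
    print(work_joules(0.5, 230.0))    # 0.5 C through 230 V -> 115.0 J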
As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface. The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together. Ørsted's discovery in 1821 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too. Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere. This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained. Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work. 
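Faraday's law of induction, as stated above, makes the induced potential difference proportional to the rate of change of magnetic flux through the circuit; for a coil of N turns the average EMF over an interval is −N·ΔΦ/Δt. A minimal Python sketch with made-up values for the coil and the flux change:

    # Average EMF induced in a coil by a changing magnetic flux (Faraday's law).
    def average_emf(turns, flux_change_wb, interval_s):
        """Average induced EMF in volts: -N * dPhi / dt."""
        return -turns * flux_change_wb / interval_s

    # A 200-turn coil whose flux falls by 0.01 Wb over 0.1 s (illustrative values).
    print(f"average EMF = {average_emf(200, -0.01, 0.1):.1f} V")   # 20.0 V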
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task. The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli. The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp. The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby storing electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it. The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one. Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second. Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts." 
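The defining relations of the three passive elements described above, V = I·R for the resistor, Q = C·V for the capacitor and V = L·(dI/dt) for the inductor, can be gathered into a few lines. A Python sketch with arbitrary component values:

    # The three passive elements side by side (component values are illustrative).
    R = 1_000.0    # resistance, ohms
    C = 100e-6     # capacitance, farads
    L = 0.5        # inductance, henries
    V = 12.0       # applied potential difference, volts

    print(f"resistor current I = V/R = {V / R * 1000:.1f} mA")       # Ohm's law
    print(f"capacitor charge Q = C*V = {C * V * 1000:.2f} mC")       # definition of the farad
    di_dt = 2.0    # current changing at 2 A per second
    print(f"inductor voltage V = L*dI/dt = {L * di_dt:.1f} V")       # definition of the henry

Charged through the resistor, the capacitor's charging current would decay with the time constant R·C, here 0.1 s, which is the behaviour described above.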
The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is P = QV/t = IV, where Q is the electric charge in coulombs, t is the time in seconds, I is the electric current in amperes, and V is the electric potential difference in volts. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency. Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system. Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering. Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge are one of the great milestones of theoretical physics. The work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents and, via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances. In the 6th century BC the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity. Electrical power is usually generated by electro-mechanical generators.
These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect. Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China. Environmental concerns with electricity generation, in particular the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels. The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed. Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage such as hydrogen, thermal or mechanical (such as pumped hydropower). Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector. The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station.
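The benefit of the high-voltage transmission made possible by the transformer can be put in numbers: for a fixed power P delivered through a line of resistance R, the current is I = P/V and the loss in the line is I²·R, so raising the voltage tenfold cuts the loss a hundredfold. A Python sketch with assumed figures for the load and the line:

    # Line loss for the same delivered power at two transmission voltages.
    power = 10e6             # 10 MW to be delivered (assumed)
    line_resistance = 5.0    # ohms of line resistance (assumed)

    for voltage in (11e3, 110e3):
        current = power / voltage              # I = P / V
        loss = current**2 * line_resistance    # I^2 * R
        print(f"{voltage / 1e3:5.0f} kV: I = {current:7.1f} A, "
              f"loss = {loss / 1e3:7.1f} kW ({loss / power:.1%} of the load)")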
A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps). The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership. Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process. Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square. A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century. Electricity is not a human invention, and may be observed in several forms in nature, notably lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. 
This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly. Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; these are electric fish in different orders. The order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants. It is said that in the 1850s, British politician William Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, "One day sir, you may tax it." According to Snopes.com, "the anecdote should be considered apocryphal, however, because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death." In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films. As the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers. With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it has required particular attention from popular culture only when it stops flowing, an event that usually signals disaster. The people who keep it flowing, such as the nameless hero of Jimmy Webb's song "Wichita Lineman" (1968), are still often cast as heroic, wizard-like figures.
[ { "paragraph_id": 0, "text": "Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.", "title": "" }, { "paragraph_id": 1, "text": "The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts.", "title": "" }, { "paragraph_id": 2, "text": "Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.", "title": "" }, { "paragraph_id": 3, "text": "The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force for the Second Industrial Revolution, with electricity's versatility driving transformations in industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society.", "title": "" }, { "paragraph_id": 4, "text": "Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the \"Thunderer of the Nile\", and described them as the \"protectors\" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.", "title": "History" }, { "paragraph_id": 5, "text": "Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. 
According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.", "title": "History" }, { "paragraph_id": 6, "text": "Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus (\"of amber\" or \"like amber\", from ἤλεκτρον, elektron, the Greek word for \"amber\") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words \"electric\" and \"electricity\", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.", "title": "History" }, { "paragraph_id": 7, "text": "Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.", "title": "History" }, { "paragraph_id": 8, "text": "In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his \"On Physical Lines of Force\" in 1861 and 1862.", "title": "History" }, { "paragraph_id": 9, "text": "While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. 
Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.", "title": "History" }, { "paragraph_id": 10, "text": "In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for \"his discovery of the law of the photoelectric effect\". The photoelectric effect is also employed in photocells such as can be found in solar panels.", "title": "History" }, { "paragraph_id": 11, "text": "The first solid-state device was the \"cat's-whisker detector\" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.", "title": "History" }, { "paragraph_id": 12, "text": "Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948.", "title": "History" }, { "paragraph_id": 13, "text": "The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity. A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.", "title": "Concepts" }, { "paragraph_id": 14, "text": "The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. 
The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together.", "title": "Concepts" }, { "paragraph_id": 15, "text": "Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.", "title": "Concepts" }, { "paragraph_id": 16, "text": "The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×10⁻¹⁹ coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10⁻¹⁹ coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.", "title": "Concepts" }, { "paragraph_id": 17, "text": "Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer.", "title": "Concepts" }, { "paragraph_id": 18, "text": "The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.", "title": "Concepts" }, { "paragraph_id": 19, "text": "By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. 
The positive-to-negative convention is widely used to simplify this situation.", "title": "Concepts" }, { "paragraph_id": 20, "text": "The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.", "title": "Concepts" }, { "paragraph_id": 21, "text": "Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetics. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.", "title": "Concepts" }, { "paragraph_id": 22, "text": "In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised.", "title": "Concepts" }, { "paragraph_id": 23, "text": "The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. 
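The slow drift of the charge carriers mentioned above can be estimated from the relation I = n·A·q·v; the free-electron density used here is an assumed typical value for copper, so the figures are only indicative.

```python
# Rough estimate of electron drift speed in a wire: v = I / (n * A * q).
import math

E_CHARGE = 1.6022e-19  # elementary charge, C
N_COPPER = 8.5e28      # assumed free-electron density of copper, electrons per m^3

def drift_velocity(current_a, diameter_mm, n=N_COPPER):
    """Average drift speed (m/s) of the electrons carrying current_a through a round wire."""
    area = math.pi * (diameter_mm * 1e-3 / 2) ** 2   # cross-section in m^2
    return current_a / (n * area * E_CHARGE)

v = drift_velocity(current_a=10.0, diameter_mm=2.0)
print(f"drift speed for 10 A in a 2 mm wire: {v * 1000:.2f} mm/s")
# roughly 0.2 mm/s, while the driving field propagates at close to the speed of light
```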
However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.", "title": "Concepts" }, { "paragraph_id": 24, "text": "An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field.", "title": "Concepts" }, { "paragraph_id": 25, "text": "The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles; and third, that they may never cross nor close in on themselves.", "title": "Concepts" }, { "paragraph_id": 26, "text": "A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body. This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects.", "title": "Concepts" }, { "paragraph_id": 27, "text": "The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.", "title": "Concepts" }, { "paragraph_id": 28, "text": "The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. 
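The 30 kV per centimetre small-gap figure quoted above lends itself to a back-of-the-envelope check of whether a given voltage will arc across a given air gap; this is only an illustrative rule of thumb, not a design calculation.

```python
# Will a small air gap arc over?  Compares the average field with the quoted ~30 kV/cm strength.
BREAKDOWN_FIELD_V_PER_CM = 30_000.0  # approximate breakdown strength of air over small gaps

def will_arc(voltage_v, gap_cm):
    """True if the average field across the gap exceeds the assumed breakdown field."""
    return (voltage_v / gap_cm) > BREAKDOWN_FIELD_V_PER_CM

print(will_arc(5_000, 0.1))  # 50 kV/cm across a 1 mm gap: True, a spark is expected
print(will_arc(5_000, 1.0))  # 5 kV/cm across a 1 cm gap: False
```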
This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect.", "title": "Concepts" }, { "paragraph_id": 29, "text": "The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity. This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, and is the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated. The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.", "title": "Concepts" }, { "paragraph_id": 30, "text": "For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable.", "title": "Concepts" }, { "paragraph_id": 31, "text": "Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface.", "title": "Concepts" }, { "paragraph_id": 32, "text": "The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together.", "title": "Concepts" }, { "paragraph_id": 33, "text": "Ørsted's discovery in 1821 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. 
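The statement that the field is the local gradient of the potential can be checked numerically; the sketch below uses the potential of a hypothetical 1 nC point charge and a central finite difference.

```python
# E = -dV/dx, estimated by finite differences for the potential of a point charge.
K_E = 8.9875e9  # Coulomb constant, N*m^2/C^2
Q = 1e-9        # a hypothetical 1 nC point charge

def potential(x):
    """Electric potential in volts at distance x (metres) from the point charge."""
    return K_E * Q / x

def field_numeric(x, h=1e-6):
    """Central-difference estimate of E = -dV/dx, in volts per metre."""
    return -(potential(x + h) - potential(x - h)) / (2 * h)

x = 0.1  # 10 cm from the charge
print(f"numerical E = {field_numeric(x):.2f} V/m, analytic E = {K_E * Q / x**2:.2f} V/m")
# the two values agree, illustrating that the field is the (negative) slope of the potential
```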
Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's words were that \"the electric conflict acts in a revolving manner.\" The force also depended on the direction of the current, for if the flow was reversed, then the force did too.", "title": "Concepts" }, { "paragraph_id": 34, "text": "Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.", "title": "Concepts" }, { "paragraph_id": 35, "text": "This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.", "title": "Concepts" }, { "paragraph_id": 36, "text": "Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.", "title": "Concepts" }, { "paragraph_id": 37, "text": "An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.", "title": "Concepts" }, { "paragraph_id": 38, "text": "The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.", "title": "Concepts" }, { "paragraph_id": 39, "text": "The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. 
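Faraday's law of induction can be illustrated with a small worked example; the coil geometry and field ramp below are invented values, chosen only to show the proportionality between the induced EMF and the rate of change of flux.

```python
# Faraday's law: EMF = -N * dPhi/dt, with flux Phi = B * A through a flat coil of N turns.
def induced_emf(turns, area_m2, b_start_t, b_end_t, dt_s):
    """Average EMF in volts while the field ramps linearly from b_start_t to b_end_t."""
    d_flux = (b_end_t - b_start_t) * area_m2  # change of flux per turn, in webers
    return -turns * d_flux / dt_s

# a hypothetical 200-turn coil of 10 cm^2 while the field rises from 0 to 0.5 T in 0.1 s
print(f"induced EMF = {induced_emf(200, 10e-4, 0.0, 0.5, 0.1):.2f} V")  # about -1 V
```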
The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp.", "title": "Concepts" }, { "paragraph_id": 40, "text": "The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it.", "title": "Concepts" }, { "paragraph_id": 41, "text": "The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.", "title": "Concepts" }, { "paragraph_id": 42, "text": "Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.", "title": "Concepts" }, { "paragraph_id": 43, "text": "Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean \"electric power in watts.\" The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is P = QV/t = IV,", "title": "Concepts" }, { "paragraph_id": 44, "text": "where Q is the electric charge in coulombs, t is the time in seconds, I is the electric current in amperes, and V is the electric potential (voltage) difference in volts.", "title": "Concepts" }, { "paragraph_id": 45, "text": "Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. 
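A short worked sketch ties together Ohm's law, the power relation P = QV/t = IV, and the kilowatt-hour used for billing; the supply voltage and element resistance are assumed appliance-style values chosen only for illustration.

```python
# Ohm's law and electric power: I = V / R, P = V * I, energy = P * running time.
def current_through(resistance_ohm, voltage_v):
    """Ohm's law: current in amperes through an ohmic resistance."""
    return voltage_v / resistance_ohm

def power_watts(voltage_v, current_a):
    """Electric power P = V * I, in watts (joules per second)."""
    return voltage_v * current_a

voltage, resistance = 230.0, 26.5        # an assumed 230 V supply and a ~2 kW heating element
current = current_through(resistance, voltage)
power = power_watts(voltage, current)
energy_kwh = power / 1000 * 3.0          # running for three hours
print(f"I = {current:.2f} A, P = {power:.0f} W, energy = {energy_kwh:.2f} kWh "
      f"({energy_kwh * 3.6:.1f} MJ)")
```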
Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.", "title": "Concepts" }, { "paragraph_id": 46, "text": "Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.", "title": "Concepts" }, { "paragraph_id": 47, "text": "Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering.", "title": "Concepts" }, { "paragraph_id": 48, "text": "Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge are one of the great milestones of theoretical physics.", "title": "Concepts" }, { "paragraph_id": 49, "text": "The work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents and, via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.", "title": "Concepts" }, { "paragraph_id": 50, "text": "In the 6th century BC the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity.", "title": "Production, storage and uses" }, { "paragraph_id": 51, "text": "Electrical power is usually generated by electro-mechanical generators. These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. 
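Maxwell's conclusion that the wave travels at the speed of light can be checked directly from the vacuum constants, since the predicted speed is 1/√(μ₀ε₀); the constants below are standard values.

```python
# Checking Maxwell's result: electromagnetic waves in vacuum travel at 1 / sqrt(mu0 * eps0).
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (classical defined value)
EPSILON_0 = 8.854187e-12    # vacuum permittivity, F/m

c = 1 / math.sqrt(MU_0 * EPSILON_0)
print(f"1/sqrt(mu0*eps0) = {c:,.0f} m/s")  # close to 299,792,458 m/s, the speed of light
```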
The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect.", "title": "Production, storage and uses" }, { "paragraph_id": 52, "text": "Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China.", "title": "Production, storage and uses" }, { "paragraph_id": 53, "text": "Environmental concerns with electricity generation, in particular the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels.", "title": "Production, storage and uses" }, { "paragraph_id": 54, "text": "The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.", "title": "Production, storage and uses" }, { "paragraph_id": 55, "text": "Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technology readiness: batteries (electrochemical storage), chemical storage such as hydrogen, thermal, and mechanical (such as pumped hydropower).", "title": "Production, storage and uses" }, { "paragraph_id": 56, "text": "Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector.", "title": "Production, storage and uses" }, { "paragraph_id": 57, "text": "The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. 
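The advantage of transmitting at higher voltage, mentioned above in connection with the transformer, follows from the fact that the resistive line loss is I²R while the line current for a given delivered power is P/V; the line figures below are invented purely for illustration.

```python
# Why transmit at high voltage: for a fixed delivered power P the line current is I = P / V,
# and the resistive loss in the conductors is I^2 * R, so raising V sharply reduces the loss.
def line_loss_watts(power_w, voltage_v, line_resistance_ohm):
    """I^2 * R loss in a line carrying power_w at transmission voltage voltage_v."""
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm

power, line_resistance = 10e6, 5.0  # a hypothetical 10 MW load fed through 5 ohms of line
for kv in (11, 132, 400):
    loss = line_loss_watts(power, kv * 1000, line_resistance)
    print(f"{kv:>3} kV: loss = {loss / 1000:8.1f} kW ({loss / power:.2%} of the load)")
```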
While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps).", "title": "Production, storage and uses" }, { "paragraph_id": 58, "text": "The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership.", "title": "Production, storage and uses" }, { "paragraph_id": 59, "text": "Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.", "title": "Production, storage and uses" }, { "paragraph_id": 60, "text": "Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square.", "title": "Production, storage and uses" }, { "paragraph_id": 61, "text": "A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. 
Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century.", "title": "Electricity and the natural world" }, { "paragraph_id": 62, "text": "Electricity is not a human invention, and may be observed in several forms in nature, notably lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly.", "title": "Electricity and the natural world" }, { "paragraph_id": 63, "text": "Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; these are electric fish in different orders. The order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.", "title": "Electricity and the natural world" }, { "paragraph_id": 64, "text": "It is said that in the 1850s, British politician William Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, \"One day sir, you may tax it.\" However, according to Snopes.com, \"the anecdote should be considered apocryphal, however, because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death.\"", "title": "Cultural perception" }, { "paragraph_id": 65, "text": "In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. \"Revitalization\" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1819), although she does not name the method of revitalization of the monster. 
The revitalization of monsters with electricity later became a stock theme in horror films.", "title": "Cultural perception" }, { "paragraph_id": 66, "text": "As the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who \"finger death at their gloves' end as they piece and repiece the living wires\" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.", "title": "Cultural perception" }, { "paragraph_id": 67, "text": "With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it has required particular attention from popular culture only when it stops flowing, an event that usually signals disaster. The people who keep it flowing, such as the nameless hero of Jimmy Webb's song \"Wichita Lineman\" (1968), are still often cast as heroic, wizard-like figures.", "title": "Cultural perception" }, { "paragraph_id": 68, "text": "", "title": "External links" } ]
Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others. The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts. Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies. The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force for the Second Industrial Revolution, with electricity's versatility driving transformations in industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society.
2001-08-28T23:03:03Z
2023-12-21T16:02:16Z
[ "Template:Citation", "Template:Wikiversity", "Template:Hatnote group", "Template:Reflist", "Template:Cite journal", "Template:Short description", "Template:Wikiquote", "Template:Commons category", "Template:Polarization states", "Template:Pp-semi", "Template:See also", "Template:Rp", "Template:Efn", "Template:Portal", "Template:Notelist", "Template:Cite news", "Template:Annotated link", "Template:Good article", "Template:Electromagnetism", "Template:Main", "Template:RP", "Template:Cite book", "Template:Cite web", "Template:Wiktionary", "Template:Footer energy", "Template:Authority control", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Electricity
9,553
Empedocles
Empedocles (/ɛmˈpɛdəkliːz/; Greek: Ἐμπεδοκλῆς; c. 494 – c. 434 BC, fl. 444–443 BC) was a Greek pre-Socratic philosopher and a native citizen of Akragas, a Greek city in Sicily. Empedocles' philosophy is best known for originating the cosmogonic theory of the four classical elements. He also proposed forces he called Love and Strife which would mix and separate the elements, respectively. Empedocles challenged the practice of animal sacrifice and killing animals for food. He developed a distinctive doctrine of reincarnation. He is generally considered the last Greek philosopher to have recorded his ideas in verse. Some of his work survives, more than is the case for any other pre-Socratic philosopher. Empedocles' death was mythologized by ancient writers, and has been the subject of a number of literary treatments. Although the exact dates of Empedocles' birth and death are unknown and ancient accounts of his life conflict on the exact details, they agree that he was born in the early 5th century BC in the Greek city of Akragas in Magna Graecia, present-day Sicily. Modern scholars accept the accuracy of the accounts that he came from a rich and noble family and that his grandfather, also named Empedocles, had won a victory in the horse race at Olympia in the 71st Olympiad (496–495 BC). Little else can be determined with accuracy. Primary sources of information on the life of Empedocles come from the Hellenistic period, several centuries after his own death and long after any reliable evidence about his life would have perished. Modern scholarship generally believes that these biographical details, including Aristotle's assertion that he was the "father of rhetoric", his chronologically impossible tutelage under Pythagoras, and his employment as a doctor and miracle worker, were fabricated from interpretations of Empedocles' poetry, as was common practice for the biographies written during this time. According to Aristotle, Empedocles died at the age of 60 (c. 430 BC), even though other writers have him living up to the age of 109. Likewise, there are myths concerning his death: a tradition, which is traced to Heraclides Ponticus, represented him as having been removed from the Earth; whereas others had him perishing in the flames of Mount Etna. Diogenes Laërtius records the legend that Empedocles died by throwing himself into Mount Etna in Sicily, so that the people would believe his body had vanished and he had turned into an immortal god; the volcano, however, threw back one of his bronze sandals, revealing the deceit. Another legend maintains that he threw himself into the volcano to prove to his disciples that he was immortal; he believed he would come back as a god after being consumed by the fire. Lucretius speaks of him with enthusiasm, and evidently viewed him as his model. Horace also refers to the death of Empedocles in his work Ars Poetica and grants poets the right to destroy themselves. In Icaro-Menippus, a comedic dialogue written by the second-century satirist Lucian of Samosata, Empedocles' final fate is re-evaluated. Rather than being incinerated in the fires of Mount Etna, he was carried up into the heavens by a volcanic eruption. Although singed by the ordeal, Empedocles survives and continues his life on the Moon, surviving by feeding on dew. 
Burnet states that Empedocles likely did not die in Sicily, that both the positive story of Empedocles being taken up to heaven and the negative one about him throwing himself into a volcano could be easily accepted by ancient writers, as there was no local tradition to contradict them. Empedocles' death is the subject of Friedrich Hölderlin's play Tod des Empedokles (The Death of Empedocles) as well as Matthew Arnold's poem Empedocles on Etna. Based on the surviving fragments of his work, modern scholars generally believe that Empedocles was directly responding to Parmenides' doctrine of monism and was likely acquainted with the work of Anaxagoras, although it is unlikely he was aware of either the later Eleatics or the doctrines of the Atomists. Many later accounts of his life claim that Empedocles studied with the Pythagoreans on the basis of his doctrine of reincarnation, although he may have instead learned this from a local tradition rather than directly from the Pythagoreans. Empedocles established four ultimate elements which make all the structures in the world—fire, air, water, earth. Empedocles called these four elements "roots", which he also identified with the mythical names of Zeus, Hera, Nestis, and Aidoneus (e.g., "Now hear the fourfold roots of everything: enlivening Hera, Hades, shining Zeus. And Nestis, moistening mortal springs with tears"). Empedocles never used the term "element" (στοιχεῖον, stoicheion), which seems to have been first used by Plato. According to the different proportions in which these four indestructible and unchangeable elements are combined with each other the difference of the structure is produced. It is in the aggregation and segregation of elements thus arising, that Empedocles, like the atomists, found the real process which corresponds to what is popularly termed growth, increase or decrease. One interpreter describes his philosophy as asserting that "Nothing new comes or can come into being; the only change that can occur is a change in the juxtaposition of element with element." This theory of the four elements became the standard dogma for the next two thousand years. The four elements, however, are simple, eternal, and unalterable, and as change is the consequence of their mixture and separation, it was also necessary to suppose the existence of moving powers that bring about mixture and separation. The four elements are both eternally brought into union and parted from one another by two divine powers, Love and Strife (Philotes and Neikos). Love (φιλότης) is responsible for the attraction of different forms of what we now call matter, and Strife (νεῖκος) is the cause of their separation. If the four elements make up the universe, then Love and Strife explain their variation and harmony. Love and Strife are attractive and repulsive forces, respectively, which are plainly observable in human behavior, but also pervade the universe. The two forces wax and wane in their dominance, but neither force ever wholly escapes the imposition of the other. As the best and original state, there was a time when the pure elements and the two powers co-existed in a condition of rest and inertness in the form of a sphere. The elements existed together in their purity, without mixture and separation, and the uniting power of Love predominated in the sphere: the separating power of Strife guarded the extreme edges of the sphere. 
Since that time, strife gained more sway and the bond which kept the pure elementary substances together in the sphere was dissolved. The elements became the world of phenomena we see today, full of contrasts and oppositions, operated on by both Love and Strife. Empedocles assumed a cyclical universe whereby the elements return and prepare the formation of the sphere for the next period of the universe. Empedocles attempted to explain the separation of elements, the formation of earth and sea, of Sun and Moon, of atmosphere. He also dealt with the first origin of plants and animals, and with the physiology of humans. As the elements entered into combinations, there appeared strange results—heads without necks, arms without shoulders. Then as these fragmentary structures met, there were seen horned heads on human bodies, bodies of oxen with human heads, and figures of double sex. But most of these products of natural forces disappeared as suddenly as they arose; only in those rare cases where the parts were found to be adapted to each other did the complex structures last. Thus the organic universe sprang from spontaneous aggregations that suited each other as if this had been intended. Soon various influences reduced creatures of double sex to a male and a female, and the world was replenished with organic life. Like Pythagoras, Empedocles believed in the transmigration of the soul or metempsychosis, that souls can be reincarnated between humans, animals and even plants. According to him, all humans, or maybe only a selected few among them, were originally long-lived daimons who dwelt in a state of bliss until committing an unspecified crime, possibly bloodshed or perjury. As a consequence, they fell to Earth, where they would be forced to spend 30,000 cycles of metempsychosis through different bodies before being able to return to the sphere of divinity. One's behavior during his lifetime would also determine his next incarnation. Wise people, who have learned the secret of life, are closer to the divine, while their souls are similarly closer to freedom from the cycle of reincarnations, after which they are able to rest in happiness for eternity. This cycle of mortal incarnation seems to have been inspired by the god Apollo's punishment as a servant to Admetus. Empedocles was a vegetarian and advocated vegetarianism, since the bodies of animals are also dwelling places of punished souls. For Empedocles, all living things were on the same spiritual plane; plants and animals are links in a chain where humans are a link too. Empedocles is credited with the first comprehensive theory of light and vision. Historian Will Durant noted that "Empedocles suggested that light takes time to pass from one point to another." He put forward the idea that we see objects because light streams out of our eyes and touches them. While flawed, this became the fundamental basis on which later Greek philosophers and mathematicians like Euclid would construct some of the most important theories of light, vision, and optics. Knowledge is explained by the principle that elements in the things outside us are perceived by the corresponding elements in ourselves. Like is known by like. The whole body is full of pores and hence respiration takes place over the whole frame. In the organs of sense these pores are specially adapted to receive the effluences which are continually rising from bodies around us; thus perception occurs. 
In vision, certain particles go forth from the eye to meet similar particles given forth from the object, and the resultant contact constitutes vision. Perception is not merely a passive reflection of external objects. Empedocles also attempted to explain the phenomenon of respiration by means of an elaborate analogy with the clepsydra, an ancient device for conveying liquids from one vessel to another. This fragment has sometimes been connected to a passage in Aristotle's Physics where Aristotle refers to people who twisted wineskins and captured air in clepsydras to demonstrate that void does not exist. The fragment certainly implies that Empedocles knew about the corporeality of air, but he says nothing whatever about the void, and there is no evidence that Empedocles performed any experiment with clepsydras. According to Diogenes Laertius, Empedocles wrote two poems, one "On Nature" and the other "On Purifications" which together comprised 5000 lines. However, only approximately 550 lines of his poetry survive, quoted in fragments by later ancient sources. In the old editions of Empedocles, about 450 lines were ascribed to "On Nature" which outlined his philosophical system, and explains not only the nature and history of the universe, including his theory of the four classical elements, but also theories on causation, perception, and thought, as well as explanations of terrestrial phenomena and biological processes. The other 100 lines were typically ascribed to his "Purifications", which was taken to be a poem about ritual purification, or the poem that contained all his religious and ethical thought, which early editors supposed that it was a poem that offered a mythical account of the world which may, nevertheless, have been part of Empedocles' philosophical system. However, with the discovery of the Strasbourg papyrus, which contains a large section of "On Nature" that includes many lines that were formerly attributed to "On Purifications" there is now considerable debate about whether the surviving fragments of his teaching should be attributed to two separate poems, with different subject matter, or whether they may all derive from one poem with two titles, or whether one title refers to part of the whole poem.
[ { "paragraph_id": 0, "text": "Empedocles (/ɛmˈpɛdəkliːz/; Greek: Ἐμπεδοκλῆς; c. 494 – c. 434 BC, fl. 444–443 BC) was a Greek pre-Socratic philosopher and a native citizen of Akragas, a Greek city in Sicily. Empedocles' philosophy is best known for originating the cosmogonic theory of the four classical elements. He also proposed forces he called Love and Strife which would mix and separate the elements, respectively.", "title": "" }, { "paragraph_id": 1, "text": "Empedocles challenged the practice of animal sacrifice and killing animals for food. He developed a distinctive doctrine of reincarnation. He is generally considered the last Greek philosopher to have recorded his ideas in verse. Some of his work survives, more than is the case for any other pre-Socratic philosopher. Empedocles' death was mythologized by ancient writers, and has been the subject of a number of literary treatments.", "title": "" }, { "paragraph_id": 2, "text": "Although the exact dates of Empedocles' birth and death are unknown and ancient accounts of his life conflict on the exact details, they agree that he was born in the early 5th century BC in the Greek city of Akragas in Magna Graecia, present-day Sicily. Modern scholars believe the accuracy of the accounts that he came from a rich and noble family and that his grandfather, also named Empedocles, had won a victory in the horse race at Olympia in the 71st. Olympiad (496–495 BC), Little else can be determined with accuracy.", "title": "Life" }, { "paragraph_id": 3, "text": "Primary sources of information on the life of Empedocles come from the Hellenistic period, several centuries after his own death and long after any reliable evidence about his life would have perished. Modern scholarship generally believes that these biographical details, including Aristotle's assertion that he was the \"father of rhetoric\", his chronologically impossible tutelage under Pythagoras, and his employment as a doctor and miracle worker, were fabricated from interpretations of Empedocles' poetry, as was common practice for the biographies written during this time.", "title": "Life" }, { "paragraph_id": 4, "text": "According to Aristotle, Empedocles died at the age of 60 (c. 430 BC), even though other writers have him living up to the age of 109. Likewise, there are myths concerning his death: a tradition, which is traced to Heraclides Ponticus, represented him as having been removed from the Earth; whereas others had him perishing in the flames of Mount Etna. Diogenes Laërtius records the legend that Empedocles died by throwing himself into Mount Etna in Sicily, so that the people would believe his body had vanished and he had turned into an immortal god; the volcano, however, threw back one of his bronze sandals, revealing the deceit. Another legend maintains that he threw himself into the volcano to prove to his disciples that he was immortal; he believed he would come back as a god after being consumed by the fire. Lucretius speaks of him with enthusiasm, and evidently viewed him as his model. Horace also refers to the death of Empedocles in his work Ars Poetica and admits poets the right to destroy themselves. In Icaro-Menippus [it], a comedic dialogue written by the second-century satirist Lucian of Samosata, Empedocles' final fate is re-evaluated. Rather than being incinerated in the fires of Mount Etna, he was carried up into the heavens by a volcanic eruption. 
Although singed by the ordeal, Empedocles survives and continues his life on the Moon, surviving by feeding on dew.", "title": "Life" }, { "paragraph_id": 5, "text": "Burnet states that Empedocles likely did not die in Sicily, that both the positive story of Empedocles being taken up to heaven and the negative one about him throwing himself into a volcano could be easily accepted by ancient writers, as there was no local tradition to contradict them.", "title": "Life" }, { "paragraph_id": 6, "text": "Empedocles' death is the subject of Friedrich Hölderlin's play Tod des Empedokles (The Death of Empedocles) as well as Matthew Arnold's poem Empedocles on Etna.", "title": "Life" }, { "paragraph_id": 7, "text": "Based on the surviving fragments of his work, modern scholars generally believe that Empedocles was directly responding to Parmenides' doctrine of monism and was likely acquainted with the work of Anaxagoras, although it is unlikely he was aware of either the later Eleatics or the doctrines of the Atomists. Many later accounts of his life claim that Empedocles studied with the Pythagoreans on the basis of his doctrine of reincarnation, although he may have instead learned this from a local tradition rather than directly from the Pythagoreans.", "title": "Philosophy" }, { "paragraph_id": 8, "text": "Empedocles established four ultimate elements which make all the structures in the world—fire, air, water, earth. Empedocles called these four elements \"roots\", which he also identified with the mythical names of Zeus, Hera, Nestis, and Aidoneus (e.g., \"Now hear the fourfold roots of everything: enlivening Hera, Hades, shining Zeus. And Nestis, moistening mortal springs with tears\"). Empedocles never used the term \"element\" (στοιχεῖον, stoicheion), which seems to have been first used by Plato. According to the different proportions in which these four indestructible and unchangeable elements are combined with each other the difference of the structure is produced. It is in the aggregation and segregation of elements thus arising, that Empedocles, like the atomists, found the real process which corresponds to what is popularly termed growth, increase or decrease. One interpreter describes his philosophy as asserting that \"Nothing new comes or can come into being; the only change that can occur is a change in the juxtaposition of element with element.\" This theory of the four elements became the standard dogma for the next two thousand years.", "title": "Philosophy" }, { "paragraph_id": 9, "text": "The four elements, however, are simple, eternal, and unalterable, and as change is the consequence of their mixture and separation, it was also necessary to suppose the existence of moving powers that bring about mixture and separation. The four elements are both eternally brought into union and parted from one another by two divine powers, Love and Strife (Philotes and Neikos). Love (φιλότης) is responsible for the attraction of different forms of what we now call matter, and Strife (νεῖκος) is the cause of their separation. If the four elements make up the universe, then Love and Strife explain their variation and harmony. Love and Strife are attractive and repulsive forces, respectively, which are plainly observable in human behavior, but also pervade the universe. 
The two forces wax and wane in their dominance, but neither force ever wholly escapes the imposition of the other.", "title": "Philosophy" }, { "paragraph_id": 10, "text": "As the best and original state, there was a time when the pure elements and the two powers co-existed in a condition of rest and inertness in the form of a sphere. The elements existed together in their purity, without mixture and separation, and the uniting power of Love predominated in the sphere: the separating power of Strife guarded the extreme edges of the sphere. Since that time, strife gained more sway and the bond which kept the pure elementary substances together in the sphere was dissolved. The elements became the world of phenomena we see today, full of contrasts and oppositions, operated on by both Love and Strife. Empedocles assumed a cyclical universe whereby the elements return and prepare the formation of the sphere for the next period of the universe.", "title": "Philosophy" }, { "paragraph_id": 11, "text": "Empedocles attempted to explain the separation of elements, the formation of earth and sea, of Sun and Moon, of atmosphere. He also dealt with the first origin of plants and animals, and with the physiology of humans. As the elements entered into combinations, there appeared strange results—heads without necks, arms without shoulders. Then as these fragmentary structures met, there were seen horned heads on human bodies, bodies of oxen with human heads, and figures of double sex. But most of these products of natural forces disappeared as suddenly as they arose; only in those rare cases where the parts were found to be adapted to each other did the complex structures last. Thus the organic universe sprang from spontaneous aggregations that suited each other as if this had been intended. Soon various influences reduced creatures of double sex to a male and a female, and the world was replenished with organic life.", "title": "Philosophy" }, { "paragraph_id": 12, "text": "Like Pythagoras, Empedocles believed in the transmigration of the soul or metempsychosis, that souls can be reincarnated between humans, animals and even plants. According to him, all humans, or maybe only a selected few among them, were originally long-lived daimons who dwelt in a state of bliss until committing an unspecified crime, possibly bloodshed or perjury. As a consequence, they fell to Earth, where they would forced to spend 30,000 cycles of metempsychosis through different bodies before being able to return to the sphere of divinity. One's behavior during his lifetime would also determine his next incarnation. Wise people, who have learned the secret of life, are closer to the divine, while their souls similarly closer are to the freedom from the cycle of reincarnations, after which they are able to rest in happiness for eternity. This cycle of mortal incarnation seems to have been inspired by the god Apollo's punishment as a servant to Admetus.", "title": "Philosophy" }, { "paragraph_id": 13, "text": "Empedocles was a vegetarian and advocated vegetarianism, since the bodies of animals are also dwelling places of punished souls. For Empedocles, all living things were on the same spiritual plane; plants and animals are links in a chain where humans are a link too.", "title": "Philosophy" }, { "paragraph_id": 14, "text": "Empedocles is credited with the first comprehensive theory of light and vision. 
Historian Will Durant noted that \"Empedocles suggested that light takes time to pass from one point to another.\" He put forward the idea that we see objects because light streams out of our eyes and touches them. While flawed, this became the fundamental basis on which later Greek philosophers and mathematicians like Euclid would construct some of the most important theories of light, vision, and optics.", "title": "Philosophy" }, { "paragraph_id": 15, "text": "Knowledge is explained by the principle that elements in the things outside us are perceived by the corresponding elements in ourselves. Like is known by like. The whole body is full of pores and hence respiration takes place over the whole frame. In the organs of sense these pores are specially adapted to receive the effluences which are continually rising from bodies around us; thus perception occurs. In vision, certain particles go forth from the eye to meet similar particles given forth from the object, and the resultant contact constitutes vision. Perception is not merely a passive reflection of external objects.", "title": "Philosophy" }, { "paragraph_id": 16, "text": "Empedocles also attempted to explain the phenomenon of respiration by means of an elaborate analogy with the clepsydra, an ancient device for conveying liquids from one vessel to another. This fragment has sometimes been connected to a passage in Aristotle's Physics where Aristotle refers to people who twisted wineskins and captured air in clepsydras to demonstrate that void does not exist. The fragment certainly implies that Empedocles knew about the corporeality of air, but he says nothing whatever about the void, and there is no evidence that Empedocles performed any experiment with clepsydras.", "title": "Philosophy" }, { "paragraph_id": 17, "text": "According to Diogenes Laertius, Empedocles wrote two poems, one \"On Nature\" and the other \"On Purifications\" which together comprised 5000 lines. However, only approximately 550 lines of his poetry survive, quoted in fragments by later ancient sources.", "title": "Writings" }, { "paragraph_id": 18, "text": "In the old editions of Empedocles, about 450 lines were ascribed to \"On Nature\" which outlined his philosophical system, and explains not only the nature and history of the universe, including his theory of the four classical elements, but also theories on causation, perception, and thought, as well as explanations of terrestrial phenomena and biological processes. The other 100 lines were typically ascribed to his \"Purifications\", which was taken to be a poem about ritual purification, or the poem that contained all his religious and ethical thought, which early editors supposed that it was a poem that offered a mythical account of the world which may, nevertheless, have been part of Empedocles' philosophical system.", "title": "Writings" }, { "paragraph_id": 19, "text": "However, with the discovery of the Strasbourg papyrus, which contains a large section of \"On Nature\" that includes many lines that were formerly attributed to \"On Purifications\" there is now considerable debate about whether the surviving fragments of his teaching should be attributed to two separate poems, with different subject matter, or whether they may all derive from one poem with two titles, or whether one title refers to part of the whole poem.", "title": "Writings" } ]
Empedocles was a Greek pre-Socratic philosopher and a native citizen of Akragas, a Greek city in Sicily. Empedocles' philosophy is best known for originating the cosmogonic theory of the four classical elements. He also proposed forces he called Love and Strife which would mix and separate the elements, respectively. Empedocles challenged the practice of animal sacrifice and killing animals for food. He developed a distinctive doctrine of reincarnation. He is generally considered the last Greek philosopher to have recorded his ideas in verse. Some of his work survives, more than is the case for any other pre-Socratic philosopher. Empedocles' death was mythologized by ancient writers, and has been the subject of a number of literary treatments.
2001-09-07T23:44:15Z
2023-12-13T11:06:10Z
[ "Template:Lang-grc-gre", "Template:Fl.", "Template:Cite LotEP", "Template:Cite encyclopedia", "Template:Vegetarianism", "Template:Circa", "Template:Cite web", "Template:Greek schools of philosophy", "Template:Cite IEP", "Template:Library resources box", "Template:Commons category", "Template:Webarchive", "Template:Internet Archive author", "Template:Interlanguage link", "Template:Cite EB1911", "Template:Lang", "Template:Authority control", "Template:IPAc-en", "Template:Sfn", "Template:Better source needed", "Template:Wikiquote", "Template:Wikisource author", "Template:Librivox author", "Template:Other uses", "Template:See also", "Template:Infobox philosopher", "Template:Cite book", "Template:Cite SEP", "Template:Short description", "Template:Use dmy dates", "Template:Reflist", "Template:MacTutor Biography", "Template:Efn", "Template:Notelist" ]
https://en.wikipedia.org/wiki/Empedocles
9,555
Ericaceae
The Ericaceae (/ˌɛrɪˈkeɪsi.aɪ, -iː/) are a family of flowering plants, commonly known as the heath or heather family, found most commonly in acidic and infertile growing conditions. The family is large, with c. 4250 known species spread across 124 genera, making it the 14th most species-rich family of flowering plants. The many well known and economically important members of the Ericaceae include the cranberry, blueberry, huckleberry, rhododendron (including azaleas), and various common heaths and heathers (Erica, Cassiope, Daboecia, and Calluna for example). The Ericaceae contain a morphologically diverse range of taxa, including herbs, dwarf shrubs, shrubs, and trees. Their leaves are usually evergreen, alternate or whorled, simple and without stipules. Their flowers are hermaphrodite and show considerable variability. The petals are often fused (sympetalous) with shapes ranging from narrowly tubular to funnelform or widely urn-shaped. The corollas are usually radially symmetrical (actinomorphic) and urn-shaped, but many flowers of the genus Rhododendron are somewhat bilaterally symmetrical (zygomorphic). Anthers open by pores. Michel Adanson used the term Vaccinia to describe a similar family, but Antoine Laurent de Jussieu first used the term Ericaceae. The name comes from the type genus Erica, which appears to be derived from the Greek word ereíkē (ἐρείκη). The exact meaning is difficult to interpret, but some sources show it as meaning 'heather'. The name may have been used informally to refer to the plants before Linnaean times, and simply been formalised when Linnaeus described Erica in 1753, and then again when Jussieu described the Ericaceae in 1789. Historically, the Ericaceae included both subfamilies and tribes. In 1971, Stevens, who outlined the history from 1876 and in some instances 1839, recognised six subfamilies (Rhododendroideae, Ericoideae, Vaccinioideae, Pyroloideae, Monotropoideae, and Wittsteinioideae), and further subdivided four of the subfamilies into tribes, the Rhododendroideae having seven tribes (Bejarieae, Rhodoreae, Cladothamneae, Epigaeae, Phyllodoceae, and Diplarcheae). Within tribe Rhodoreae, five genera were described, Rhododendron L. (including Azalea L. pro parte), Therorhodion Small, Ledum L., Tsusiophyllum Max., Menziesia J. E. Smith, that were eventually transferred into Rhododendron, along with Diplarche from the monogeneric tribe Diplarcheae. In 2002, systematic research resulted in the inclusion of the formerly recognised families Empetraceae, Epacridaceae, Monotropaceae, Prionotaceae, and Pyrolaceae into the Ericaceae based on a combination of molecular, morphological, anatomical, and embryological data, analysed within a phylogenetic framework. The move significantly increased the morphological and geographical range found within the group. One possible classification of the resulting family includes 9 subfamilies, 126 genera, and about 4000 species: The Ericaceae have a nearly worldwide distribution. They are absent from continental Antarctica, parts of the high Arctic, central Greenland, northern and central Australia, and much of the lowland tropics and neotropics. The family is largely composed of plants that can tolerate acidic, infertile, shady conditions. Due to their tolerance of acidic conditions, this plant family is also typical of peat bogs and blanket bogs; examples include Rhododendron groenlandicum and species in the genus Kalmia. 
In eastern North America, members of this family often grow in association with an oak canopy, in a habitat known as an oak-heath forest. Plants in Ericaceae, especially species in Vaccinium, rely on buzz pollination for successful pollination to occur. The majority of ornamental species from Rhododendron are native to East Asia, but most varieties cultivated today are hybrids. Most rhododendrons grown in the United States are cultivated in the Pacific Northwest. The United States is the top producer of both blueberries and cranberries, with the state of Maine growing the majority of lowbush blueberry. The wide distribution of genera within Ericaceae has led to situations in which there are both American and European plants with the same name - for example, blueberry: Vaccinium corymbosum in North America, and Vaccinium myrtillus in Europe; and cranberry: Vaccinium macrocarpon in America, and Vaccinium oxycoccos in Europe. Like other stress-tolerant plants, many Ericaceae have mycorrhizal fungi to assist with extracting nutrients from infertile soils, as well as evergreen foliage to conserve absorbed nutrients. This trait is not found in the Clethraceae and Cyrillaceae, the two families most closely related to the Ericaceae. Most Ericaceae (excluding the Monotropoideae, and some Epacridoideae) form a distinctive accumulation of mycorrhizae, in which fungi grow in and around the roots and provide the plant with nutrients. The Pyroloideae are mixotrophic and gain sugars from the mycorrhizae, as well as nutrients. The cultivation of blueberries, cranberries, and wintergreen for their fruit and oils relies especially on these unique relationships with fungi, as a healthy mycorrhizal network in the soil helps the plants to resist environmental stresses that might otherwise damage crop yield. Ericoid mycorrhizae are responsible for a high rate of uptake of nitrogen, which causes naturally low levels of free nitrogen in ericoid soils. These mycorrhizal fungi may also increase the tolerance of Ericaceae to heavy metals in soil, and may cause plants to grow faster by producing phytohormones. In many parts of the world, a "heath" or "heathland" is an environment characterised by an open dwarf-shrub community found on low-quality acidic soils, generally dominated by plants in Ericaceae. Heathlands are a broadly anthropogenic habitat, requiring regular grazing or burning to prevent succession. Heaths are particularly abundant - and constitute important cultural elements - in Norway, the United Kingdom, the Netherlands, Germany, Spain, Portugal, and other countries in Central and Western Europe. The most common examples of plants in Ericaceae which dominate heathlands are Calluna vulgaris, Erica cinerea, Erica tetralix, and Vaccinium myrtillus. In heathland, plants in Ericaceae serve as host plants to the butterfly Plebejus argus. Other insects, such as Saturnia pavonia, Myrmeleotettix maculatus, Metrioptera brachyptera, and Picromerus bidens are closely associated with heathland environments. Reptiles thrive in heaths due to an abundance of sunlight and prey, and birds hunt the insects and reptiles which are present. Some evidence suggests eutrophic rainwater can convert ericoid heaths with species such as Erica tetralix to grasslands. Nitrogen is particularly suspect in this regard, and may be causing measurable changes to the distribution and abundance of some ericaceous species.
[ { "paragraph_id": 0, "text": "The Ericaceae (/ˌɛrɪˈkeɪsi.aɪ, -iː/) are a family of flowering plants, commonly known as the heath or heather family, found most commonly in acidic and infertile growing conditions. The family is large, with c. 4250 known species spread across 124 genera, making it the 14th most species-rich family of flowering plants. The many well known and economically important members of the Ericaceae include the cranberry, blueberry, huckleberry, rhododendron (including azaleas), and various common heaths and heathers (Erica, Cassiope, Daboecia, and Calluna for example).", "title": "" }, { "paragraph_id": 1, "text": "The Ericaceae contain a morphologically diverse range of taxa, including herbs, dwarf shrubs, shrubs, and trees. Their leaves are usually evergreen, alternate or whorled, simple and without stipules. Their flowers are hermaphrodite and show considerable variability. The petals are often fused (sympetalous) with shapes ranging from narrowly tubular to funnelform or widely urn-shaped. The corollas are usually radially symmetrical (actinomorphic) and urn-shaped, but many flowers of the genus Rhododendron are somewhat bilaterally symmetrical (zygomorphic). Anthers open by pores.", "title": "Description" }, { "paragraph_id": 2, "text": "Michel Adanson used the term Vaccinia to describe a similar family, but Antoine Laurent de Jussieu first used the term Ericaceae. The name comes from the type genus Erica, which appears to be derived from the Greek word ereíkē (ἐρείκη). The exact meaning is difficult to interpret, but some sources show it as meaning 'heather'. The name may have been used informally to refer to the plants before Linnaean times, and simply been formalised when Linnaeus described Erica in 1753, and then again when Jussieu described the Ericaceae in 1789.", "title": "Taxonomy" }, { "paragraph_id": 3, "text": "Historically, the Ericaceae included both subfamilies and tribes. In 1971, Stevens, who outlined the history from 1876 and in some instances 1839, recognised six subfamilies (Rhododendroideae, Ericoideae, Vaccinioideae, Pyroloideae, Monotropoideae, and Wittsteinioideae), and further subdivided four of the subfamilies into tribes, the Rhododendroideae having seven tribes (Bejarieae, Rhodoreae, Cladothamneae, Epigaeae, Phyllodoceae, and Diplarcheae). Within tribe Rhodoreae, five genera were described, Rhododendron L. (including Azalea L. pro parte), Therorhodion Small, Ledum L., Tsusiophyllum Max., Menziesia J. E. Smith, that were eventually transferred into Rhododendron, along with Diplarche from the monogeneric tribe Diplarcheae.", "title": "Taxonomy" }, { "paragraph_id": 4, "text": "In 2002, systematic research resulted in the inclusion of the formerly recognised families Empetraceae, Epacridaceae, Monotropaceae, Prionotaceae, and Pyrolaceae into the Ericaceae based on a combination of molecular, morphological, anatomical, and embryological data, analysed within a phylogenetic framework. The move significantly increased the morphological and geographical range found within the group. One possible classification of the resulting family includes 9 subfamilies, 126 genera, and about 4000 species:", "title": "Taxonomy" }, { "paragraph_id": 5, "text": "The Ericaceae have a nearly worldwide distribution. 
They are absent from continental Antarctica, parts of the high Arctic, central Greenland, northern and central Australia, and much of the lowland tropics and neotropics.", "title": "Distribution and ecology" }, { "paragraph_id": 6, "text": "The family is largely composed of plants that can tolerate acidic, infertile, shady conditions. Due to their tolerance of acidic conditions, this plant family is also typical of peat bogs and blanket bogs; examples include Rhododendron groenlandicum and species in the genus Kalmia. In eastern North America, members of this family often grow in association with an oak canopy, in a habitat known as an oak-heath forest. Plants in Ericaceae, especially species in Vaccinium, rely on buzz pollination for successful pollination to occur.", "title": "Distribution and ecology" }, { "paragraph_id": 7, "text": "The majority of ornamental species from Rhododendron are native to East Asia, but most varieties cultivated today are hybrids. Most rhododendrons grown in the United States are cultivated in the Pacific Northwest. The United States is the top producer of both blueberries and cranberries, with the state of Maine growing the majority of lowbush blueberry. The wide distribution of genera within Ericaceae has led to situations in which there are both American and European plants with the same name - for example, blueberry: Vaccinium corymbosum in North America, and Vaccinium myrtillus in Europe; and cranberry: Vaccinium macrocarpon in America, and Vaccinium oxycoccos in Europe.", "title": "Distribution and ecology" }, { "paragraph_id": 8, "text": "Like other stress-tolerant plants, many Ericaceae have mycorrhizal fungi to assist with extracting nutrients from infertile soils, as well as evergreen foliage to conserve absorbed nutrients. This trait is not found in the Clethraceae and Cyrillaceae, the two families most closely related to the Ericaceae. Most Ericaceae (excluding the Monotropoideae, and some Epacridoideae) form a distinctive accumulation of mycorrhizae, in which fungi grow in and around the roots and provide the plant with nutrients. The Pyroloideae are mixotrophic and gain sugars from the mycorrhizae, as well as nutrients.", "title": "Distribution and ecology" }, { "paragraph_id": 9, "text": "The cultivation of blueberries, cranberries, and wintergreen for their fruit and oils relies especially on these unique relationships with fungi, as a healthy mycorrhizal network in the soil helps the plants to resist environmental stresses that might otherwise damage crop yield. Ericoid mycorrhizae are responsible for a high rate of uptake of nitrogen, which causes naturally low levels of free nitrogen in ericoid soils. These mycorrhizal fungi may also increase the tolerance of Ericaceae to heavy metals in soil, and may cause plants to grow faster by producing phytohormones.", "title": "Distribution and ecology" }, { "paragraph_id": 10, "text": "In many parts of the world, a \"heath\" or \"heathland\" is an environment characterised by an open dwarf-shrub community found on low-quality acidic soils, generally dominated by plants in Ericaceae. Heathlands are a broadly anthropogenic habitat, requiring regular grazing or burning to prevent succession. Heaths are particularly abundant - and constitute important cultural elements - in Norway, the United Kingdom, the Netherlands, Germany, Spain, Portugal, and other countries in Central and Western Europe. 
The most common examples of plants in Ericaceae which dominate heathlands are Calluna vulgaris, Erica cineria, Erica tetralix, and Vaccinium myrtillus.", "title": "Distribution and ecology" }, { "paragraph_id": 11, "text": "In heathland, plants in Ericaceae serve as host plants to the butterfly Plebejus argus. Other insects, such as Saturnia pavonia, Myrmeleotettix maculatus, Metrioptera brachyptera, and Picromerus bidens are closely associated with heathland environments. Reptiles thrive in heaths due to an abundance of sunlight and prey, and birds hunt the insects and reptiles which are present.", "title": "Distribution and ecology" }, { "paragraph_id": 12, "text": "Some evidence suggests eutrophic rainwater can convert ericoid heaths with species such as Erica tetralix to grasslands. Nitrogen is particularly suspect in this regard, and may be causing measurable changes to the distribution and abundance of some ericaceous species.", "title": "Distribution and ecology" } ]
The Ericaceae are a family of flowering plants, commonly known as the heath or heather family, found most commonly in acidic and infertile growing conditions. The family is large, with c. 4250 known species spread across 124 genera, making it the 14th most species-rich family of flowering plants. The many well known and economically important members of the Ericaceae include the cranberry, blueberry, huckleberry, rhododendron, and various common heaths and heathers.
2002-02-25T15:43:11Z
2023-12-14T00:37:03Z
[ "Template:Use dmy dates", "Template:Cite journal", "Template:Cite web", "Template:Citation", "Template:Taxonbar", "Template:Authority control", "Template:Short description", "Template:Automatic taxobox", "Template:Lang", "Template:Transliteration", "Template:Commons category", "Template:IPAc-en", "Template:Wikispecies", "Template:Webarchive", "Template:Angiosperm families", "Template:Distinguish", "Template:Nbsp", "Template:Sfnp", "Template:Main", "Template:Reflist", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Ericaceae
9,559
Electrical network
An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Thus all circuits are networks, but not all networks are circuits (although networks without a closed loop are often imprecisely referred to as "circuits"). Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response. A resistive network is a network containing only resistors and ideal current and voltage sources. Analysis of resistive networks is less complicated than analysis of networks containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC network. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties. A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools. An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source. An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power to the circuit, provide power gain, and control the current flow within the circuit. Passive networks do not contain any sources of electromotive force. They consist of passive elements like resistors and capacitors. A network is linear if its signals obey the principle of superposition; otherwise it is non-linear. Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation if driven with a large enough current. In this region, the behaviour of the inductor is very non-linear. Discrete passive components (resistors, capacitors and inductors) are called lumped elements because all of their, respectively, resistance, capacitance and inductance is assumed to be located ("lumped") at one place. This design philosophy is called the lumped-element model and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. A new design model is needed for such cases called the distributed-element model. Networks designed to this model are called distributed-element circuits. A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter. Sources can be classified as independent sources and dependent sources. 
An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC). The strength of voltage or current is not changed by any variation in the connected network. Dependent sources depend upon a particular element of the circuit for delivering the power or voltage or current depending upon the type of source it is. A number of electrical laws apply to all linear resistive networks. These include: Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically. The laws can generally be extended to networks containing reactances. They cannot be used in networks that contain nonlinear or time-varying components. To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model. Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes. More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP, or symbolically using software such as SapWin. When faced with a new circuit, the software first tries to find a steady state solution, that is, one where all nodes conform to Kirchhoff's current law and the voltages across and through each element of the circuit conform to the voltage/current equations governing that element. Once the steady state solution is found, the operating points of each element in the circuit are known. For a small signal analysis, every non-linear element can be linearized around its operation point to obtain the small-signal estimate of the voltages and currents. This is an application of Ohm's Law. The resulting linear circuit matrix can be solved with Gaussian elimination. Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
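The passage above describes how applying Kirchhoff's laws to a linear resistive network yields a set of simultaneous equations that a simulator then solves, typically by Gaussian elimination. The following short Python sketch illustrates that idea for a purely resistive DC network using nodal analysis; the two-node circuit, its component values, and the helper-function name are hypothetical examples for illustration only, not part of the article or of any particular simulator.

import numpy as np

def solve_resistive_network(conductance_matrix, injected_currents):
    # Kirchhoff's current law at each non-reference node gives G @ v = i;
    # Gaussian elimination (numpy's dense solver) returns the node voltages v.
    return np.linalg.solve(conductance_matrix, injected_currents)

# Hypothetical circuit: a 1 A DC source feeds node 1; R1 = 2 ohm from node 1
# to ground, R2 = 4 ohm between nodes 1 and 2, R3 = 4 ohm from node 2 to ground.
G = np.array([[1/2 + 1/4, -1/4],
              [-1/4,      1/4 + 1/4]])   # conductance matrix in siemens
i = np.array([1.0, 0.0])                 # injected currents in amperes
v = solve_resistive_network(G, i)        # node voltages, here [1.6, 0.8] volts

Production tools such as SPICE assemble and solve much larger systems of the same form, adding nonlinear device equations, linearization around operating points, and time stepping for transient analysis.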
[ { "paragraph_id": 0, "text": "An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Thus all circuits are networks, but not all networks are circuits (although networks without a closed loop are often imprecisely referred to as \"circuits\"). Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response.", "title": "" }, { "paragraph_id": 1, "text": "A resistive network is a network containing only resistors and ideal current and voltage sources. Analysis of resistive networks is less complicated than analysis of networks containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC network. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties.", "title": "" }, { "paragraph_id": 2, "text": "A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools.", "title": "" }, { "paragraph_id": 3, "text": "An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source.", "title": "Classification" }, { "paragraph_id": 4, "text": "An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power to the circuit, provide power gain, and control the current flow within the circuit.", "title": "Classification" }, { "paragraph_id": 5, "text": "Passive networks do not contain any sources of electromotive force. They consist of passive elements like resistors and capacitors.", "title": "Classification" }, { "paragraph_id": 6, "text": "A network is linear if its signals obey the principle of superposition; otherwise it is non-linear. Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation if driven with a large enough current. In this region, the behaviour of the inductor is very non-linear.", "title": "Classification" }, { "paragraph_id": 7, "text": "Discrete passive components (resistors, capacitors and inductors) are called lumped elements because all of their, respectively, resistance, capacitance and inductance is assumed to be located (\"lumped\") at one place. This design philosophy is called the lumped-element model and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. 
At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. A new design model is needed for such cases called the distributed-element model. Networks designed to this model are called distributed-element circuits.", "title": "Classification" }, { "paragraph_id": 8, "text": "A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter.", "title": "Classification" }, { "paragraph_id": 9, "text": "Sources can be classified as independent sources and dependent sources.", "title": "Classification of sources" }, { "paragraph_id": 10, "text": "An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC). The strength of voltage or current is not changed by any variation in the connected network.", "title": "Classification of sources" }, { "paragraph_id": 11, "text": "Dependent sources depend upon a particular element of the circuit for delivering the power or voltage or current depending upon the type of source it is.", "title": "Classification of sources" }, { "paragraph_id": 12, "text": "A number of electrical laws apply to all linear resistive networks. These include:", "title": "Applying electrical laws" }, { "paragraph_id": 13, "text": "Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically. The laws can generally be extended to networks containing reactances. They cannot be used in networks that contain nonlinear or time-varying components.", "title": "Applying electrical laws" }, { "paragraph_id": 14, "text": "To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model.", "title": "Design methods" }, { "paragraph_id": 15, "text": "Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes.", "title": "Design methods" }, { "paragraph_id": 16, "text": "More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP, or symbolically using software such as SapWin.", "title": "Network simulation software" }, { "paragraph_id": 17, "text": "When faced with a new circuit, the software first tries to find a steady state solution, that is, one where all nodes conform to Kirchhoff's current law and the voltages across and through each element of the circuit conform to the voltage/current equations governing that element.", "title": "Network simulation software" }, { "paragraph_id": 18, "text": "Once the steady state solution is found, the operating points of each element in the circuit are known. For a small signal analysis, every non-linear element can be linearized around its operation point to obtain the small-signal estimate of the voltages and currents. This is an application of Ohm's Law. 
The resulting linear circuit matrix can be solved with Gaussian elimination.", "title": "Network simulation software" }, { "paragraph_id": 19, "text": "Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.", "title": "Network simulation software" } ]
An electrical network is an interconnection of electrical components or a model of such an interconnection, consisting of electrical elements. An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Thus all circuits are networks, but not all networks are circuits. Linear electrical networks, a special type consisting only of sources, linear lumped elements, and linear distributed elements, have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response. A resistive network is a network containing only resistors and ideal current and voltage sources. Analysis of resistive networks is less complicated than analysis of networks containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC network. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties. A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools.
2001-10-15T01:09:11Z
2023-11-20T19:36:18Z
[ "Template:Short description", "Template:Electromagnetism", "Template:Network analysis navigation", "Template:Commons category", "Template:Wiktionary", "Template:Cite web", "Template:Portal bar", "Template:For", "Template:Refimprove", "Template:See also", "Template:Reflist", "Template:Cite journal", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Electrical_network
9,561
Euler (disambiguation)
Leonhard Euler (1707–1783) was a Swiss mathematician and physicist. Euler may also refer to:
[ { "paragraph_id": 0, "text": "Leonhard Euler (1707–1783) was a Swiss mathematician and physicist.", "title": "" }, { "paragraph_id": 1, "text": "Euler may also refer to:", "title": "" } ]
Leonhard Euler (1707–1783) was a Swiss mathematician and physicist. Euler may also refer to:
2022-03-14T03:38:34Z
[ "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Euler_(disambiguation)
9,566
Empty set
In mathematics, the empty set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Any set other than the empty set is called non-empty. In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty). The empty set may also be called the void set. Common notations for the empty set include "{ }", "∅", and "∅". The latter two symbols were introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. In the past, "0" was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation. The symbol ∅ is available at Unicode point U+2205. It can be coded in HTML as &empty; and as &#8709;. It can be coded in LaTeX as \varnothing. The symbol ∅ is coded in LaTeX as \emptyset. When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead. In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements (that is, neither of them has an element not in the other). As a result, there can be only one set with no elements, hence the usage of "the empty set" rather than "an empty set". The empty set has the following properties: its only subset is the empty set itself, and its power set is the set containing only the empty set. For any set A: the empty set is a subset of A; the union of A with the empty set is A; the intersection of A with the empty set is the empty set; and the Cartesian product of A with the empty set is the empty set. For any property P: for every element of the empty set, the property P holds (vacuously); and there is no element of the empty set for which the property P holds. Conversely, if for some property P and some set V, the following two statements hold: for every element of V the property P holds, and there is no element of V for which the property P holds, then V = ∅. By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ belongs to A. Indeed, if it were not true that every element of ∅ is in A, then there would be at least one element of ∅ that is not present in A. Since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim; it is a vacuous truth. This is often paraphrased as "everything is true of the elements of the empty set." In the usual set-theoretic definition of natural numbers, zero is modelled by the empty set. When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set is zero. The reason for this is that zero is the identity element for addition. Similarly, the product of the elements of the empty set should be considered to be one (see empty product), since one is the identity element for multiplication. A derangement is a permutation of a set without fixed points. The empty set can be considered a derangement of itself, because it has only one permutation (0! = 1), and it is vacuously true that no element (of the empty set) can be found that retains its original position. 
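The conventions just described (the subset property, vacuous truth, the empty sum being zero, the empty product being one, and 0! = 1) can be checked directly in most programming languages; the following Python lines are only an illustration of those conventions and are not part of the article.

import math

empty = set()                        # the empty set, { }
assert len(empty) == 0               # its cardinality is zero
assert empty.issubset({1, 2, 3})     # the empty set is a subset of any set
assert all(False for _ in empty)     # any claim about its elements holds vacuously
assert sum(empty) == 0               # empty sum: the identity element of addition
assert math.prod(empty) == 1         # empty product: the identity element of multiplication
assert math.factorial(0) == 1        # 0! = 1: the empty set has exactly one permutation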
Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers (namely negative infinity, denoted −∞, which is defined to be less than every other extended real number, and positive infinity, denoted +∞, which is defined to be greater than every other extended real number), we have that sup ∅ = −∞ and inf ∅ = +∞. That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators. In any topological space X, the empty set is open by definition, as is X. Since the complement of an open set is closed and the empty set and X are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact. The closure of the empty set is empty. This is known as "preservation of nullary unions." If A is a set, then there exists precisely one function f from ∅ to A, the empty function. As a result, the empty set is the unique initial object of the category of sets and functions. The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set. In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal is defined as S(α) = α ∪ {α}. Thus, we have 0 = ∅, 1 = 0 ∪ {0} = {∅}, 2 = 1 ∪ {1} = {∅, {∅}}, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, ℕ₀, such that the Peano axioms of arithmetic are satisfied. In the context of sets of real numbers, Cantor used P ≡ O to denote "P contains no single point". This ≡ O notation was utilized in definitions, for example Cantor defined two sets as being disjoint if their intersection has an absence of points, however it is debatable whether Cantor viewed O as an existent set on its own, or if Cantor merely used ≡ O as an emptiness predicate. Zermelo accepted O itself as a set, but considered it an "improper set". 
In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways: While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians. The empty set is not the same thing as nothing; rather, it is a set with nothing inside it and a set is always something. This issue can be overcome by viewing a set as a bag—an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king." The popular syllogism "Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness" is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is ∅" and the latter to "The set {ham sandwich} is better than the set ∅". The first compares elements of sets, while the second compares the sets themselves. Jonathan Lowe argues that while the empty set it is also the case that: George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members.
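To make the von Neumann construction described above concrete, here is a small Python sketch (an illustration only, using frozensets as a stand-in for pure sets) that builds the first few natural numbers starting from the empty set; each ordinal built this way has exactly as many elements as the number it represents.

def successor(ordinal):
    # S(a) = a ∪ {a}
    return ordinal | frozenset({ordinal})

zero = frozenset()          # 0 = ∅
one = successor(zero)       # 1 = {∅}
two = successor(one)        # 2 = {∅, {∅}}
three = successor(two)      # 3 = {∅, {∅}, {∅, {∅}}}
assert [len(n) for n in (zero, one, two, three)] == [0, 1, 2, 3]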
[ { "paragraph_id": 0, "text": "In mathematics, the empty set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set.", "title": "" }, { "paragraph_id": 1, "text": "Any set other than the empty set is called non-empty.", "title": "" }, { "paragraph_id": 2, "text": "In some textbooks and popularizations, the empty set is referred to as the \"null set\". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty). The empty set may also be called the void set.", "title": "" }, { "paragraph_id": 3, "text": "Common notations for the empty set include \"{ }\", \" ∅ {\\displaystyle \\emptyset } \", and \"∅\". The latter two symbols were introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. In the past, \"0\" was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation.", "title": "Notation" }, { "paragraph_id": 4, "text": "The symbol ∅ is available at Unicode point U+2205. It can be coded in HTML as &empty; and as &#8709;. It can be coded in LaTeX as \\varnothing. The symbol ∅ {\\displaystyle \\emptyset } is coded in LaTeX as \\emptyset.", "title": "Notation" }, { "paragraph_id": 5, "text": "When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead.", "title": "Notation" }, { "paragraph_id": 6, "text": "In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements (that is, neither of them has an element not in the other). As a result, there can be only one set with no elements, hence the usage of \"the empty set\" rather than \"an empty set\".", "title": "Properties" }, { "paragraph_id": 7, "text": "The empty set has the following properties:", "title": "Properties" }, { "paragraph_id": 8, "text": "For any set A:", "title": "Properties" }, { "paragraph_id": 9, "text": "For any property P:", "title": "Properties" }, { "paragraph_id": 10, "text": "Conversely, if for some property P and some set V, the following two statements hold:", "title": "Properties" }, { "paragraph_id": 11, "text": "then V = ∅ . {\\displaystyle V=\\varnothing .}", "title": "Properties" }, { "paragraph_id": 12, "text": "By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ {\\displaystyle \\varnothing } belongs to A. Indeed, if it were not true that every element of ∅ {\\displaystyle \\varnothing } is in A, then there would be at least one element of ∅ {\\displaystyle \\varnothing } that is not present in A. Since there are no elements of ∅ {\\displaystyle \\varnothing } at all, there is no element of ∅ {\\displaystyle \\varnothing } that is not in A. Any statement that begins \"for every element of ∅ {\\displaystyle \\varnothing } \" is not making any substantive claim; it is a vacuous truth. 
This is often paraphrased as \"everything is true of the elements of the empty set.\"", "title": "Properties" }, { "paragraph_id": 13, "text": "In the usual set-theoretic definition of natural numbers, zero is modelled by the empty set.", "title": "Properties" }, { "paragraph_id": 14, "text": "When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set is zero. The reason for this is that zero is the identity element for addition. Similarly, the product of the elements of the empty set should be considered to be one (see empty product), since one is the identity element for multiplication.", "title": "Properties" }, { "paragraph_id": 15, "text": "A derangement is a permutation of a set without fixed points. The empty set can be considered a derangement of itself, because it has only one permutation ( 0 ! = 1 {\\displaystyle 0!=1} ), and it is vacuously true that no element (of the empty set) can be found that retains its original position.", "title": "Properties" }, { "paragraph_id": 16, "text": "Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two \"numbers\" or \"points\" to the real numbers (namely negative infinity, denoted − ∞ , {\\displaystyle -\\infty \\!\\,,} which is defined to be less than every other extended real number, and positive infinity, denoted + ∞ , {\\displaystyle +\\infty \\!\\,,} which is defined to be greater than every other extended real number), we have that:", "title": "In other areas of mathematics" }, { "paragraph_id": 17, "text": "and", "title": "In other areas of mathematics" }, { "paragraph_id": 18, "text": "That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators.", "title": "In other areas of mathematics" }, { "paragraph_id": 19, "text": "In any topological space X, the empty set is open by definition, as is X. Since the complement of an open set is closed and the empty set and X are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact.", "title": "In other areas of mathematics" }, { "paragraph_id": 20, "text": "The closure of the empty set is empty. This is known as \"preservation of nullary unions.\"", "title": "In other areas of mathematics" }, { "paragraph_id": 21, "text": "If A {\\displaystyle A} is a set, then there exists precisely one function f {\\displaystyle f} from ∅ {\\displaystyle \\varnothing } to A , {\\displaystyle A,} the empty function. As a result, the empty set is the unique initial object of the category of sets and functions.", "title": "In other areas of mathematics" }, { "paragraph_id": 22, "text": "The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. 
This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set.", "title": "In other areas of mathematics" }, { "paragraph_id": 23, "text": "In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal is defined as S ( α ) = α ∪ { α } {\\displaystyle S(\\alpha )=\\alpha \\cup \\{\\alpha \\}} . Thus, we have 0 = ∅ {\\displaystyle 0=\\varnothing } , 1 = 0 ∪ { 0 } = { ∅ } {\\displaystyle 1=0\\cup \\{0\\}=\\{\\varnothing \\}} , 2 = 1 ∪ { 1 } = { ∅ , { ∅ } } {\\displaystyle 2=1\\cup \\{1\\}=\\{\\varnothing ,\\{\\varnothing \\}\\}} , and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, N 0 {\\displaystyle \\mathbb {N} _{0}} , such that the Peano axioms of arithmetic are satisfied.", "title": "In other areas of mathematics" }, { "paragraph_id": 24, "text": "In the context of sets of real numbers, Cantor used P ≡ O {\\displaystyle P\\equiv O} to denote \" P {\\displaystyle P} contains no single point\". This ≡ O {\\displaystyle \\equiv O} notation was utilized in definitions, for example Cantor defined two sets as being disjoint if their intersection has an absence of points, however it is debatable whether Cantor viewed O {\\displaystyle O} as an existent set on its own, or if Cantor merely used ≡ O {\\displaystyle \\equiv O} as an emptiness predicate. Zermelo accepted O {\\displaystyle O} itself as a set, but considered it an \"improper set\".", "title": "Questioned existence" }, { "paragraph_id": 25, "text": "In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways:", "title": "Questioned existence" }, { "paragraph_id": 26, "text": "While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians.", "title": "Questioned existence" }, { "paragraph_id": 27, "text": "The empty set is not the same thing as nothing; rather, it is a set with nothing inside it and a set is always something. This issue can be overcome by viewing a set as a bag—an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather \"the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king.\"", "title": "Questioned existence" }, { "paragraph_id": 28, "text": "The popular syllogism", "title": "Questioned existence" }, { "paragraph_id": 29, "text": "is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements \"Nothing is better than eternal happiness\" and \"[A] ham sandwich is better than nothing\" in a mathematical tone. According to Darling, the former is equivalent to \"The set of all things that are better than eternal happiness is ∅ {\\displaystyle \\varnothing } \" and the latter to \"The set {ham sandwich} is better than the set ∅ {\\displaystyle \\varnothing } \". 
The first compares elements of sets, while the second compares the sets themselves.", "title": "Questioned existence" }, { "paragraph_id": 30, "text": "Jonathan Lowe argues that while the empty set", "title": "Questioned existence" }, { "paragraph_id": 31, "text": "it is also the case that:", "title": "Questioned existence" }, { "paragraph_id": 32, "text": "George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members.", "title": "Questioned existence" } ]
In mathematics, the empty set is the unique set having no elements; its size or cardinality is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Any set other than the empty set is called non-empty. In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero. The empty set may also be called the void set.
2001-07-13T19:21:14Z
2023-12-30T17:13:44Z
[ "Template:Reflist", "Template:Mathematical logic", "Template:Redirect", "Template:Cite book", "Template:Citation", "Template:ISBN", "Template:Set theory", "Template:Other uses of", "Template:Em", "Template:Annotated link", "Template:Citation needed", "Template:Cite web", "Template:MathWorld", "Template:Short description", "Template:Main", "Template:Code" ]
https://en.wikipedia.org/wiki/Empty_set
9,567
Egoism
Egoism is a philosophy concerned with the role of the self, or ego, as the motivation and goal of one's own action. Different theories of egoism encompass a range of disparate ideas and can generally be categorized into descriptive or normative forms. That is, they may be interested in either describing that people do act in self-interest or prescribing that they should. Other definitions of egoism may instead emphasise action according to one's will rather than one's self-interest, and furthermore posit that this is a truer sense of egoism. The New Catholic Encyclopedia states of egoism that it "incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable." The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though, for example, this is not present in the philosophy of Friedrich Nietzsche. The term egoism is derived from the French égoïsme, from the Latin ego (first person singular personal pronoun; "I") with the French -ïsme ("-ism"). The descriptive variants of egoism are concerned with self-regard as a factual description of human motivation and, in its furthest application, that all human motivation stems from the desires and interest of the ego. In these theories, action which is self-regarding may be simply termed egoistic. The position that people tend to act in their own self-interest is called default egoism, whereas psychological egoism is the position that all motivations are rooted in an ultimately self-serving psyche. That is, in its strong form, that even seemingly altruistic actions are only disguised as such and are always self-serving. Its weaker form instead holds that, even if altruistic motivation is possible, the willed action necessarily becomes egoistic in serving one's own will. Also interesting is "autoism" as in "autistic psychopathy". In contrast to this and philosophical egoism, biological egoism (also called evolutionary egoism) describes motivations rooted solely in reproductive self-interest (i.e. reproductive fitness). Furthermore, selfish gene theory holds that it is the self-interest of genetic information that conditions human behaviour. Theories which hold egoism to be normative stipulate that the ego ought to promote its own interests above other values. Where this ought is held to be a pragmatic judgment it is termed rational egoism and where it is held to be a moral judgment it is termed ethical egoism. The Stanford Encyclopedia of Philosophy states that "ethical egoism might also apply to things other than acts, such as rules or character traits" but that such variants are uncommon. Furthermore, conditional egoism is a consequentialist form of ethical egoism which holds that egoism is morally right if it leads to morally acceptable ends. John F. Welsh, in his work Max Stirner's Dialectical Egoism: A New Interpretation, coins the term dialectical egoism to describe an interpretation of the egoist philosophy of Max Stirner as being fundamentally dialectical. 
Normative egoism, as in the case of Stirner, need not reject that some modes of behavior are to be valued above others—such as Stirner's affirmation that non-restriction and autonomy are to be most highly valued. Contrary theories, however, may just as easily favour egoistic domination of others. Stirner's egoism argues that individuals are impossible to fully comprehend, as no understanding of the self can adequately describe the fullness of experience. Stirner has been broadly understood as containing traits of both psychological egoism and rational egoism. Unlike the self-interest described by Ayn Rand, Stirner did not address individual self-interest, selfishness, or prescriptions for how one should act. He urged individuals to decide for themselves and fulfill their own egoism. The philosophy of Friedrich Nietzsche has been linked to forms of both descriptive and normative egoism. Nietzsche, in attacking the widely held moral abhorrence for egoistic action, seeks to free higher human beings from their belief that this morality is good for them. He rejects Christian and Kantian ethics as merely the disguised egoism of slave morality. In his On the Genealogy of Morals, Friedrich Nietzsche traces the origins of master–slave morality to fundamentally egoistic value judgments. In the aristocratic valuation, excellence and virtue come as a form of superiority over the common masses, which the priestly valuation, in ressentiment of power, seeks to invert—where the powerless and pitiable become the moral ideal. This upholding of unegoistic actions is therefore seen as stemming from a desire to reject the superiority or excellency of others. He holds that all normative systems which operate in the role often associated with morality favor the interests of some people, often, though not necessarily, at the expense of others. Nevertheless, Nietzsche also states in the same book that there is no 'doer' of any acts, be they selfish or not: Jonas Monte of Brigham Young University argues that Nietzsche doubted if any 'I' existed in the first place, which the former defined as "a conscious Ego who commands mental states". In 1851, French philosopher Auguste Comte coined the term altruism (French: altruisme; from Italian altrui, from Latin alteri 'others') as an antonym for egoism. In this sense, altruism defined Comte's position that all self-regard must be replaced with only the regard for others. While Friedrich Nietzsche does not view altruism as a suitable antonym for egoism, Comte instead states that only two human motivations exist, egoistic and altruistic, and that the two cannot be mediated; that is, one must always predominate the other. For Comte, the total subordination of the self to altruism is a necessary condition to both social and personal benefit. Nietzsche, rather than rejecting the practice of altruism, warns that despite there being neither much altruism nor equality in the world, there is almost universal endorsement of their value and, notoriously, even by those who are its worst enemies in practice. Egoist philosophy commonly views the subordination of the self to altruism as either a form of domination that limits freedom, an unethical or irrational principle, or an extension of some egoistic root cause. In evolutionary theory, biological altruism is the observed occurrence of an organism acting to the benefit of others at the cost of its own reproductive fitness. 
While biological egoism does grant that an organism may act to the benefit of others, it describes only such when in accordance with reproductive self-interest. Kin altruism and selfish gene theory are examples of this division. On biological altruism, the Stanford Encyclopedia of Philosophy states: "Contrary to what is often thought, an evolutionary approach to human behaviour does not imply that humans are likely to be motivated by self-interest alone. One strategy by which ‘selfish genes’ may increase their future representation is by causing humans to be non-selfish, in the psychological sense." This is a central topic within contemporary discourse of psychological egoism. The history of egoist thought has often overlapped with that of nihilism. For example, Max Stirner's rejection of absolutes and abstract concepts often places him among the first philosophical nihilists. The popular description of Stirner as a moral nihilist, however, may fail to encapsulate certain subtleties of his ethical thought. The Stanford Encyclopedia of Philosophy states, "Stirner is clearly committed to the non-nihilistic view that certain kinds of character and modes of behaviour (namely autonomous individuals and actions) are to be valued above all others. His conception of morality is, in this respect, a narrow one, and his rejection of the legitimacy of moral claims is not to be confused with a denial of the propriety of all normative or ethical judgement." Stirner's nihilism may instead be understood as cosmic nihilism. Likewise, both normative and descriptive theories of egoism further developed under Russian nihilism, shortly giving birth to rational egoism. Nihilist philosophers Dmitry Pisarev and Nikolay Chernyshevsky were influential in this regard, compounding such forms of egoism with hard determinism. Max Stirner's philosophy strongly rejects modernity and is highly critical of the increasing dogmatism and oppressive social institutions that embody it. In order that it might be surpassed, egoist principles are upheld as a necessary advancement beyond the modern world. The Stanford Encyclopedia states that Stirner's historical analyses serve to "undermine historical narratives which portray the modern development of humankind as the progressive realisation of freedom, but also to support an account of individuals in the modern world as increasingly oppressed". This critique of humanist discourses especially has linked Stirner to more contemporary poststructuralist thought. Since normative egoism rejects the moral obligation to subordinate the ego to society-at-large or a ruling class, it may be predisposed to certain political implications. The Internet Encyclopedia of Philosophy states: Egoists ironically can be read as moral and political egalitarians glorifying the dignity of each and every person to pursue life as they see fit. Mistakes in securing the proper means and appropriate ends will be made by individuals, but if they are morally responsible for their actions they not only will bear the consequences but also the opportunity for adapting and learning. In contrast with this however, such an ethic may not morally obligate against the egoistic exercise of power over others. On these grounds, Friedrich Nietzsche criticizes egalitarian morality and political projects as unconducive to the development of human excellence. 
Max Stirner's own conception, the union of egoists as detailed in his work The Ego and Its Own, saw a proposed form of societal relations whereby limitations on egoistic action are rejected. When posthumously adopted by the anarchist movement, this became the foundation for egoist anarchism. Stirner's variant of property theory is similarly dialectical, where the concept of ownership is only that personal distinction made between what is one's property and what is not. Consequentially, it is the exercise of control over property which constitutes the nonabstract possession of it. In contrast to this, Ayn Rand incorporates capitalist property rights into her egoist theory. Egoist philosopher Nikolai Gavrilovich Chernyshevskii was the dominant intellectual figure behind the 1860–1917 revolutionary movement in Russia, which resulted in the assassination of Tsar Alexander II eight years before his death in 1889. Dmitry Pisarev was a similarly radical influence within the movement, though he did not personally advocate political revolution. Philosophical egoism has also found wide appeal among anarchist revolutionaries and thinkers, such as John Henry Mackay, Benjamin Tucker, Émile Armand, Han Ryner Gérard de Lacaze-Duthiers, Renzo Novatore, Miguel Giménez Igualada, and Lev Chernyi. Though he did not involve in any revolutionary movements himself, the entire school of individualist anarchism owes much of its intellectual heritage to Max Stirner. Egoist philosophy may be misrepresented as a principally revolutionary field of thought. However, neither Hobbesian nor Nietzschean theories of egoism approve of political revolution. Anarchism and revolutionary socialism were also strongly rejected by Ayn Rand and her followers. The philosophies of both Nietzsche and Stirner were heavily appropriated by fascist and proto-fascist ideologies. Nietzsche in particular has infamously been represented as a predecessor to Nazism and a substantial academic effort was necessary to disassociate his ideas from their aforementioned appropriation. At first sight, Nazi totalitarianism may seem the opposite of Stirner's radical individualism. But fascism was above all an attempt to dissolve the social ties created by history and replace them by artificial bonds among individuals who were expected to render explicit obedience to the state on grounds of absolute egoism. Fascist education combined the tenets of asocial egoism and unquestioning conformism, the latter being the means by which the individual secured his own niche in the system. Stirner's philosophy has nothing to say against conformism, it only objects to the Ego being subordinated to any higher principle: the egoist is free to adjust to the world if it is clear he will better himself by doing so. His 'rebellion' may take the form of utter servility if it will further his interest; what he must not do is to be bound by 'general' values or myths of humanity. The totalitarian ideal of a barrack-like society from which all real, historical ties have been eliminated is perfectly consistent with Stirner's principles: the egoist, by his very nature, must be prepared to fight under any flag that suits his convenience.
[ { "paragraph_id": 0, "text": "Egoism is a philosophy concerned with the role of the self, or ego, as the motivation and goal of one's own action. Different theories of egoism encompass a range of disparate ideas and can generally be categorized into descriptive or normative forms. That is, they may be interested in either describing that people do act in self-interest or prescribing that they should. Other definitions of egoism may instead emphasise action according to one's will rather than one's self-interest, and furthermore posit that this is a truer sense of egoism.", "title": "" }, { "paragraph_id": 1, "text": "The New Catholic Encyclopedia states of egoism that it \"incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable.\" The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though, for example, this is not present in the philosophy of Friedrich Nietzsche.", "title": "" }, { "paragraph_id": 2, "text": "The term egoism is derived from the French égoïsme, from the Latin ego (first person singular personal pronoun; \"I\") with the French -ïsme (\"-ism\").", "title": "Overview" }, { "paragraph_id": 3, "text": "The descriptive variants of egoism are concerned with self-regard as a factual description of human motivation and, in its furthest application, that all human motivation stems from the desires and interest of the ego. In these theories, action which is self-regarding may be simply termed egoistic.", "title": "Overview" }, { "paragraph_id": 4, "text": "The position that people tend to act in their own self-interest is called default egoism, whereas psychological egoism is the position that all motivations are rooted in an ultimately self-serving psyche. That is, in its strong form, that even seemingly altruistic actions are only disguised as such and are always self-serving. Its weaker form instead holds that, even if altruistic motivation is possible, the willed action necessarily becomes egoistic in serving one's own will. Also interesting is \"autoism\" as in \"autistic psychopathy\". In contrast to this and philosophical egoism, biological egoism (also called evolutionary egoism) describes motivations rooted solely in reproductive self-interest (i.e. reproductive fitness). Furthermore, selfish gene theory holds that it is the self-interest of genetic information that conditions human behaviour.", "title": "Overview" }, { "paragraph_id": 5, "text": "Theories which hold egoism to be normative stipulate that the ego ought to promote its own interests above other values. Where this ought is held to be a pragmatic judgment it is termed rational egoism and where it is held to be a moral judgment it is termed ethical egoism. The Stanford Encyclopedia of Philosophy states that \"ethical egoism might also apply to things other than acts, such as rules or character traits\" but that such variants are uncommon. Furthermore, conditional egoism is a consequentialist form of ethical egoism which holds that egoism is morally right if it leads to morally acceptable ends. John F. 
Welsh, in his work Max Stirner's Dialectical Egoism: A New Interpretation, coins the term dialectical egoism to describe an interpretation of the egoist philosophy of Max Stirner as being fundamentally dialectical.", "title": "Overview" }, { "paragraph_id": 6, "text": "Normative egoism, as in the case of Stirner, need not reject that some modes of behavior are to be valued above others—such as Stirner's affirmation that non-restriction and autonomy are to be most highly valued. Contrary theories, however, may just as easily favour egoistic domination of others.", "title": "Overview" }, { "paragraph_id": 7, "text": "Stirner's egoism argues that individuals are impossible to fully comprehend, as no understanding of the self can adequately describe the fullness of experience. Stirner has been broadly understood as containing traits of both psychological egoism and rational egoism. Unlike the self-interest described by Ayn Rand, Stirner did not address individual self-interest, selfishness, or prescriptions for how one should act. He urged individuals to decide for themselves and fulfill their own egoism.", "title": "Theoreticians" }, { "paragraph_id": 8, "text": "The philosophy of Friedrich Nietzsche has been linked to forms of both descriptive and normative egoism. Nietzsche, in attacking the widely held moral abhorrence for egoistic action, seeks to free higher human beings from their belief that this morality is good for them. He rejects Christian and Kantian ethics as merely the disguised egoism of slave morality.", "title": "Theoreticians" }, { "paragraph_id": 9, "text": "In his On the Genealogy of Morals, Friedrich Nietzsche traces the origins of master–slave morality to fundamentally egoistic value judgments. In the aristocratic valuation, excellence and virtue come as a form of superiority over the common masses, which the priestly valuation, in ressentiment of power, seeks to invert—where the powerless and pitiable become the moral ideal. This upholding of unegoistic actions is therefore seen as stemming from a desire to reject the superiority or excellency of others. He holds that all normative systems which operate in the role often associated with morality favor the interests of some people, often, though not necessarily, at the expense of others.", "title": "Theoreticians" }, { "paragraph_id": 10, "text": "Nevertheless, Nietzsche also states in the same book that there is no 'doer' of any acts, be they selfish or not:", "title": "Theoreticians" }, { "paragraph_id": 11, "text": "Jonas Monte of Brigham Young University argues that Nietzsche doubted if any 'I' existed in the first place, which the former defined as \"a conscious Ego who commands mental states\".", "title": "Theoreticians" }, { "paragraph_id": 12, "text": "In 1851, French philosopher Auguste Comte coined the term altruism (French: altruisme; from Italian altrui, from Latin alteri 'others') as an antonym for egoism. In this sense, altruism defined Comte's position that all self-regard must be replaced with only the regard for others.", "title": "Relation to altruism" }, { "paragraph_id": 13, "text": "While Friedrich Nietzsche does not view altruism as a suitable antonym for egoism, Comte instead states that only two human motivations exist, egoistic and altruistic, and that the two cannot be mediated; that is, one must always predominate the other. For Comte, the total subordination of the self to altruism is a necessary condition to both social and personal benefit. 
Nietzsche, rather than rejecting the practice of altruism, warns that despite there being neither much altruism nor equality in the world, there is almost universal endorsement of their value and, notoriously, even by those who are its worst enemies in practice. Egoist philosophy commonly views the subordination of the self to altruism as either a form of domination that limits freedom, an unethical or irrational principle, or an extension of some egoistic root cause.", "title": "Relation to altruism" }, { "paragraph_id": 14, "text": "In evolutionary theory, biological altruism is the observed occurrence of an organism acting to the benefit of others at the cost of its own reproductive fitness. While biological egoism does grant that an organism may act to the benefit of others, it describes only such when in accordance with reproductive self-interest. Kin altruism and selfish gene theory are examples of this division. On biological altruism, the Stanford Encyclopedia of Philosophy states: \"Contrary to what is often thought, an evolutionary approach to human behaviour does not imply that humans are likely to be motivated by self-interest alone. One strategy by which ‘selfish genes’ may increase their future representation is by causing humans to be non-selfish, in the psychological sense.\" This is a central topic within contemporary discourse of psychological egoism.", "title": "Relation to altruism" }, { "paragraph_id": 15, "text": "The history of egoist thought has often overlapped with that of nihilism. For example, Max Stirner's rejection of absolutes and abstract concepts often places him among the first philosophical nihilists. The popular description of Stirner as a moral nihilist, however, may fail to encapsulate certain subtleties of his ethical thought. The Stanford Encyclopedia of Philosophy states, \"Stirner is clearly committed to the non-nihilistic view that certain kinds of character and modes of behaviour (namely autonomous individuals and actions) are to be valued above all others. His conception of morality is, in this respect, a narrow one, and his rejection of the legitimacy of moral claims is not to be confused with a denial of the propriety of all normative or ethical judgement.\" Stirner's nihilism may instead be understood as cosmic nihilism. Likewise, both normative and descriptive theories of egoism further developed under Russian nihilism, shortly giving birth to rational egoism. Nihilist philosophers Dmitry Pisarev and Nikolay Chernyshevsky were influential in this regard, compounding such forms of egoism with hard determinism.", "title": "Relation to nihilism" }, { "paragraph_id": 16, "text": "Max Stirner's philosophy strongly rejects modernity and is highly critical of the increasing dogmatism and oppressive social institutions that embody it. In order that it might be surpassed, egoist principles are upheld as a necessary advancement beyond the modern world. The Stanford Encyclopedia states that Stirner's historical analyses serve to \"undermine historical narratives which portray the modern development of humankind as the progressive realisation of freedom, but also to support an account of individuals in the modern world as increasingly oppressed\". 
This critique of humanist discourses especially has linked Stirner to more contemporary poststructuralist thought.", "title": "Relation to nihilism" }, { "paragraph_id": 17, "text": "Since normative egoism rejects the moral obligation to subordinate the ego to society-at-large or a ruling class, it may be predisposed to certain political implications. The Internet Encyclopedia of Philosophy states:", "title": "Political egoism" }, { "paragraph_id": 18, "text": "Egoists ironically can be read as moral and political egalitarians glorifying the dignity of each and every person to pursue life as they see fit. Mistakes in securing the proper means and appropriate ends will be made by individuals, but if they are morally responsible for their actions they not only will bear the consequences but also the opportunity for adapting and learning.", "title": "Political egoism" }, { "paragraph_id": 19, "text": "In contrast with this however, such an ethic may not morally obligate against the egoistic exercise of power over others. On these grounds, Friedrich Nietzsche criticizes egalitarian morality and political projects as unconducive to the development of human excellence. Max Stirner's own conception, the union of egoists as detailed in his work The Ego and Its Own, saw a proposed form of societal relations whereby limitations on egoistic action are rejected. When posthumously adopted by the anarchist movement, this became the foundation for egoist anarchism.", "title": "Political egoism" }, { "paragraph_id": 20, "text": "Stirner's variant of property theory is similarly dialectical, where the concept of ownership is only that personal distinction made between what is one's property and what is not. Consequentially, it is the exercise of control over property which constitutes the nonabstract possession of it. In contrast to this, Ayn Rand incorporates capitalist property rights into her egoist theory.", "title": "Political egoism" }, { "paragraph_id": 21, "text": "Egoist philosopher Nikolai Gavrilovich Chernyshevskii was the dominant intellectual figure behind the 1860–1917 revolutionary movement in Russia, which resulted in the assassination of Tsar Alexander II eight years before his death in 1889. Dmitry Pisarev was a similarly radical influence within the movement, though he did not personally advocate political revolution.", "title": "Political egoism" }, { "paragraph_id": 22, "text": "Philosophical egoism has also found wide appeal among anarchist revolutionaries and thinkers, such as John Henry Mackay, Benjamin Tucker, Émile Armand, Han Ryner Gérard de Lacaze-Duthiers, Renzo Novatore, Miguel Giménez Igualada, and Lev Chernyi. Though he did not involve in any revolutionary movements himself, the entire school of individualist anarchism owes much of its intellectual heritage to Max Stirner.", "title": "Political egoism" }, { "paragraph_id": 23, "text": "Egoist philosophy may be misrepresented as a principally revolutionary field of thought. However, neither Hobbesian nor Nietzschean theories of egoism approve of political revolution. Anarchism and revolutionary socialism were also strongly rejected by Ayn Rand and her followers.", "title": "Political egoism" }, { "paragraph_id": 24, "text": "The philosophies of both Nietzsche and Stirner were heavily appropriated by fascist and proto-fascist ideologies. 
Nietzsche in particular has infamously been represented as a predecessor to Nazism and a substantial academic effort was necessary to disassociate his ideas from their aforementioned appropriation.", "title": "Political egoism" }, { "paragraph_id": 25, "text": "At first sight, Nazi totalitarianism may seem the opposite of Stirner's radical individualism. But fascism was above all an attempt to dissolve the social ties created by history and replace them by artificial bonds among individuals who were expected to render explicit obedience to the state on grounds of absolute egoism. Fascist education combined the tenets of asocial egoism and unquestioning conformism, the latter being the means by which the individual secured his own niche in the system. Stirner's philosophy has nothing to say against conformism, it only objects to the Ego being subordinated to any higher principle: the egoist is free to adjust to the world if it is clear he will better himself by doing so. His 'rebellion' may take the form of utter servility if it will further his interest; what he must not do is to be bound by 'general' values or myths of humanity. The totalitarian ideal of a barrack-like society from which all real, historical ties have been eliminated is perfectly consistent with Stirner's principles: the egoist, by his very nature, must be prepared to fight under any flag that suits his convenience.", "title": "Political egoism" } ]
Egoism is a philosophy concerned with the role of the self, or ego, as the motivation and goal of one's own action. Different theories of egoism encompass a range of disparate ideas and can generally be categorized into descriptive or normative forms. That is, they may be interested in either describing that people do act in self-interest or prescribing that they should. Other definitions of egoism may instead emphasise action according to one's will rather than one's self-interest, and furthermore posit that this is a truer sense of egoism. The New Catholic Encyclopedia states of egoism that it "incorporates in itself certain basic truths: it is natural for man to love himself; he should moreover do so, since each one is ultimately responsible for himself; pleasure, the development of one's potentialities, and the acquisition of power are normally desirable." The moral censure of self-interest is a common subject of critique in egoist philosophy, with such judgments being examined as means of control and the result of power relations. Egoism may also reject that insight into one's internal motivation can arrive extrinsically, such as from psychology or sociology, though, for example, this is not present in the philosophy of Friedrich Nietzsche.
2001-07-15T05:18:43Z
2023-12-25T14:41:56Z
[ "Template:Use British English", "Template:Lang-fr", "Template:Etymology", "Template:Reflist", "Template:Cite web", "Template:About-distinguish", "Template:Redirect", "Template:Anl", "Template:Cite dictionary", "Template:Use mdy dates", "Template:Expand section", "Template:Further", "Template:Blockquote", "Template:Npsn", "Template:Excerpt", "Template:Wikt-lang", "Template:Expand list", "Template:Nihilism", "Template:For", "Template:Linktext", "Template:Authority control", "Template:Individualism sidebar", "Template:Quote", "Template:Short description", "Template:Related", "Template:Cite journal", "Template:Cite book", "Template:Wiktionary", "Template:Cite encyclopedia", "Template:Quote frame" ]
https://en.wikipedia.org/wiki/Egoism
9,569
Endomorphism
In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space V is a linear map f: V → V, and an endomorphism of a group G is a group homomorphism f: G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself. In any category, the composition of any two endomorphisms of X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid, the full transformation monoid, and denoted End(X) (or EndC(X) to emphasize the category C). An invertible endomorphism of X is called an automorphism. The set of all automorphisms is a subset of End(X) with a group structure, called the automorphism group of X and denoted Aut(X). In the following diagram, the arrows denote implication: Any two endomorphisms of an abelian group, A, can be added together by the rule (f + g)(a) = f(a) + g(a). Under this addition, and with multiplication being defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of Z n {\displaystyle \mathbb {Z} ^{n}} is the ring of all n × n matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however there are rings that are not the endomorphism ring of any abelian group. In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing the notion of element orbits to be defined, etc. Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details should be found in the article about operator theory. An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism. Let S be an arbitrary set. Among endofunctions on S one finds permutations of S and constant functions associating to every x in S the same element c in S. Every permutation of S has the codomain equal to its domain and is bijective and invertible. If S has more than one element, a constant function on S has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number n the floor of n/2 has its image equal to its codomain and is not invertible. Finite endofunctions are equivalent to directed pseudoforests. For sets of size n there are n endofunctions on the set. Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses.
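As a concrete check of the statement above that the endomorphisms of Z^n form a ring, namely the ring of all n × n matrices with integer entries, the sketch below fixes n = 2 and represents endomorphisms of Z^2 as 2 × 2 integer matrices. It is an added illustration, not part of the source article, and the helper names mat_add and mat_mul are assumptions chosen only for this example. Addition of endomorphisms, (f + g)(a) = f(a) + g(a), becomes entrywise matrix addition, and composition becomes matrix multiplication.

```python
# Illustrative sketch (added; not part of the source article):
# endomorphisms of Z^2 represented as 2x2 integer matrices.
# (f + g)(a) = f(a) + g(a) is entrywise addition; composition is
# matrix multiplication, so End(Z^2) carries a ring structure.

def mat_add(f, g):
    return [[f[i][j] + g[i][j] for j in range(2)] for i in range(2)]

def mat_mul(f, g):
    return [[sum(f[i][k] * g[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f = [[1, 2], [0, 1]]
g = [[3, 0], [1, 1]]
h = [[0, 1], [1, 0]]

# Spot-check a ring axiom: composition distributes over addition.
assert mat_mul(f, mat_add(g, h)) == mat_add(mat_mul(f, g), mat_mul(f, h))

# Composition is not commutative in general, so the ring is noncommutative.
assert mat_mul(f, g) != mat_mul(g, f)
```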
[ { "paragraph_id": 0, "text": "In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space V is a linear map f: V → V, and an endomorphism of a group G is a group homomorphism f: G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself.", "title": "" }, { "paragraph_id": 1, "text": "In any category, the composition of any two endomorphisms of X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid, the full transformation monoid, and denoted End(X) (or EndC(X) to emphasize the category C).", "title": "" }, { "paragraph_id": 2, "text": "An invertible endomorphism of X is called an automorphism. The set of all automorphisms is a subset of End(X) with a group structure, called the automorphism group of X and denoted Aut(X). In the following diagram, the arrows denote implication:", "title": "Automorphisms" }, { "paragraph_id": 3, "text": "Any two endomorphisms of an abelian group, A, can be added together by the rule (f + g)(a) = f(a) + g(a). Under this addition, and with multiplication being defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of Z n {\\displaystyle \\mathbb {Z} ^{n}} is the ring of all n × n matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however there are rings that are not the endomorphism ring of any abelian group.", "title": "Endomorphism rings" }, { "paragraph_id": 4, "text": "In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing the notion of element orbits to be defined, etc.", "title": "Operator theory" }, { "paragraph_id": 5, "text": "Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details should be found in the article about operator theory.", "title": "Operator theory" }, { "paragraph_id": 6, "text": "An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism.", "title": "Endofunctions" }, { "paragraph_id": 7, "text": "Let S be an arbitrary set. Among endofunctions on S one finds permutations of S and constant functions associating to every x in S the same element c in S. Every permutation of S has the codomain equal to its domain and is bijective and invertible. If S has more than one element, a constant function on S has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number n the floor of n/2 has its image equal to its codomain and is not invertible.", "title": "Endofunctions" }, { "paragraph_id": 8, "text": "Finite endofunctions are equivalent to directed pseudoforests. 
For sets of size n there are n^n endofunctions on the set.", "title": "Endofunctions" }, { "paragraph_id": 9, "text": "Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses.", "title": "Endofunctions" } ]
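The endofunction claims in the entry above are easy to verify by brute force for a small set. The sketch below, again an added illustration rather than source material, enumerates every endofunction on a three-element set, confirms the n^n count, checks that composition with the identity behaves as expected (so the endofunctions form a monoid), and identifies the bijective endofunctions and the involutions among them.

```python
# Illustrative sketch (added; not part of the source article):
# brute-force check of the endofunction counts for a 3-element set.
from itertools import product

n = 3
S = range(n)

# An endofunction on S is a tuple (f(0), f(1), f(2)); there are n^n of them.
endofunctions = list(product(S, repeat=n))
assert len(endofunctions) == n ** n            # 27 = 3^3

def compose(f, g):
    """(f ∘ g)(x) = f(g(x)); composing endofunctions gives an endofunction."""
    return tuple(f[g[x]] for x in S)

identity = tuple(S)
assert all(compose(f, identity) == f == compose(identity, f)
           for f in endofunctions)             # End(S) is a monoid

# The bijective endofunctions are exactly the 3! = 6 permutations of S,
# and 4 of them are involutions (the identity and the three transpositions).
bijective = [f for f in endofunctions if len(set(f)) == n]
assert len(bijective) == 6
involutions = [f for f in bijective if compose(f, f) == identity]
assert len(involutions) == 4
```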
In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space V is a linear map f: V → V, and an endomorphism of a group G is a group homomorphism f: G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself. In any category, the composition of any two endomorphisms of X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid, the full transformation monoid, denoted End(X).
2001-01-28T20:49:02Z
2023-07-31T17:34:29Z
[ "Template:Springer", "Template:Short description", "Template:Redirect", "Template:Math", "Template:Main", "Template:Citation" ]
https://en.wikipedia.org/wiki/Endomorphism
9,574
Eric Hoffer
Eric Hoffer (July 25, 1902 – May 21, 1983) was an American moral and social philosopher. He was the author of ten books and was awarded the Presidential Medal of Freedom in February 1983. His first book, The True Believer (1951), was widely recognized as a classic, receiving critical acclaim from both scholars and laymen, although Hoffer believed that The Ordeal of Change (1963) was his finest work. The Eric Hoffer Book Award is an international literary prize established in his honor. The University of California, Berkeley awards an annual literary prize named jointly for Hoffer. Many elements of Hoffer's early life are in doubt and never verified, but in autobiographical statements, Hoffer claimed to have been born in 1902 in The Bronx, New York City, New York, to Knut and Elsa (Goebel) Hoffer. His parents were immigrants from Alsace, then part of Imperial Germany. By age five, Hoffer could already read in both English and his parents' native German. When he was five, his mother fell down the stairs with him in her arms. He later recalled, "I lost my sight at the age of seven. Two years before, my mother and I fell down a flight of stairs. She did not recover and died in that second year after the fall. I lost my sight and, for a time, my memory." Hoffer spoke with a pronounced German accent all his life, and spoke the language fluently. He was raised by a live-in relative or servant, a German immigrant named Martha. His eyesight inexplicably returned when he was 15. Fearing he might lose it again, he seized on the opportunity to read as much as he could. His recovery proved permanent, but Hoffer never abandoned his reading habit. Hoffer was a young man when he also lost his father. The cabinetmaker's union paid for Knut Hoffer's funeral and gave Hoffer about $300 insurance money. He took a bus to Los Angeles and spent the next 10 years wandering, as he remembered, “up and down the land, dodging hunger and grieving over the world.” Hoffer eventually landed on Skid Row, reading, occasionally writing, and working at odd jobs. In 1931, he considered suicide by drinking a solution of oxalic acid, but he could not bring himself to do it. He left Skid Row and became a migrant worker, following the harvests in California. He acquired a library card where he worked, dividing his time "between the books and the brothels." He also prospected for gold in the mountains. Snowed in for the winter, he read the Essays by Michel de Montaigne. Montaigne impressed Hoffer deeply, and Hoffer often made reference to him. He also developed a respect for America's underclass, which he said was "lumpy with talent." He wrote a novel, Four Years in Young Hank's Life, and a novella, Chance and Mr. Kunze, both partly autobiographical. He also penned a long article based on his experiences in a federal work camp, "Tramps and Pioneers." It was never published, but a truncated version appeared in Harper's Magazine after he became well known. Hoffer tried to enlist in the U.S. Army at age 40 during World War II, but he was rejected due to a hernia. Instead, he began work as a longshoreman on the docks of San Francisco in 1943. At the same time, he began to write seriously. Hoffer left the docks in 1964, and shortly after became an adjunct professor at the University of California, Berkeley. He later retired from public life in 1970. “I'm going to crawl back into my hole where I started,” he said. “I don't want to be a public person or anybody's spokesman... Any man can ride a train. 
Only a wise man knows when to get off.” In 1970, he endowed the Lili Fabilli and Eric Hoffer Laconic Essay Prize for students, faculty, and staff at the University of California, Berkeley. Hoffer called himself an atheist but had sympathetic views of religion and described it as a positive force. He died at his home in San Francisco in 1983 at the age of 80. Hoffer was influenced by his modest roots and working-class surroundings, seeing in it vast human potential. In a letter to Margaret Anderson in 1941, he wrote: "My writing is done in railroad yards while waiting for a freight, in the fields while waiting for a truck, and at noon after lunch. Towns are too distracting." He once remarked, "my writing grows out of my life just as a branch from a tree." When he was called an intellectual, he insisted that he simply was a longshoreman. Hoffer has been dubbed by some authors a "longshoreman philosopher." Hoffer, who was an only child, never married. He fathered a child with Lili Fabilli Osborne, named Eric Osborne, who was born in 1955 and raised by Lili Osborne and her husband, Selden Osborne. Lili Fabilli Osborne had become acquainted with Hoffer through her husband, a fellow longshoreman and acquaintance of Hoffer's. Despite this, Selden Osborne and Hoffer remained on good terms. Hoffer referred to Eric Osborne as his son or godson. Lili Fabilli Osborne died in 2010 at the age of 93. Prior to her death, Osborne was the executor of Hoffer's estate, and vigorously controlled the rights to his intellectual property. In his 2012 book Eric Hoffer: The Longshoreman Philosopher, journalist Tom Bethell revealed doubts about Hoffer's account of his early life. Although Hoffer claimed his parents were from Alsace-Lorraine, Hoffer himself spoke with a pronounced Bavarian accent. He claimed to have been born and raised in the Bronx but had no Bronx accent. His lover and executor Lili Fabilli stated that she always thought Hoffer was an immigrant. Her son, Eric Fabilli, said that Hoffer's life might have been comparable to that of B. Traven and considered hiring a genealogist to investigate Hoffer's early life, to which Hoffer reportedly replied, "Are you sure you want to know?" Pescadero land-owner Joe Gladstone, a family friend of the Fabilli's who also knew Hoffer, said of Hoffer's account of his early life: "I don't believe a word of it." To this day, no one ever has claimed to have known Hoffer in his youth, and no records apparently exist of his parents, nor indeed of Hoffer himself until he was about forty, when his name appeared in a census. Hoffer came to public attention with the 1951 publication of his first book, The True Believer: Thoughts on the Nature of Mass Movements, which consists of a preface and 125 sections, which are divided into 18 chapters. Hoffer analyzes the phenomenon of "mass movements," a general term that he applies to revolutionary parties, nationalistic movements, and religious movements. He summarizes his thesis in §113: "A movement is pioneered by men of words, materialized by fanatics and consolidated by men of actions." Hoffer argues that fanatical and extremist cultural movements, whether religious, social, or national, arise when large numbers of frustrated people, believing their own individual lives to be worthless or spoiled, join a movement demanding radical change. 
But the real attraction for this population is an escape from the self, not a realization of individual hopes: "A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation." Hoffer consequently argues that the appeal of mass movements is interchangeable: in the Germany of the 1920s and the 1930s, for example, the Communists and National Socialists were ostensibly enemies, but sometimes enlisted each other's members, since they competed for the same kind of marginalized, angry, frustrated people. For the "true believer," Hoffer argues that particular beliefs are less important than escaping from the burden of the autonomous self. Harvard historian Arthur M. Schlesinger Jr. said of The True Believer: "This brilliant and original inquiry into the nature of mass movements is a genuine contribution to our social thought." Subsequent to the publication of The True Believer (1951), Eric Hoffer touched upon Asia and American interventionism in several of his essays. In "The Awakening of Asia" (1954), published in The Reporter and later his book The Ordeal of Change (1963), Hoffer discusses the reasons for unrest on the continent. In particular, he argues that the root cause of social discontent in Asia was not government corruption, "communist agitation," or the legacy of European colonial "oppression and exploitation," but rather that a "craving for pride" was the central problem in Asia, suggesting a problem that could not be relieved through typical American intervention. During the Vietnam War, despite his objections to the antiwar movement and acceptance of the notion that the war was somehow necessary to prevent a third world war, Hoffer remained skeptical concerning American interventionism, specifically the intelligence with which the war was being conducted in Southeast Asia. After the United States became involved in the war, Hoffer wished to avoid defeat in Vietnam because of his fear that such a defeat would transform American society for ill, opening the door to those who would preach a stab-in-the-back myth and allow for the rise of an American version of Hitler. In The Temper of Our Time (1967), Hoffer implies that the United States as a rule should avoid interventions in the first place: "the better part of statesmanship might be to know clearly and precisely what not to do, and leave action to the improvisation of chance." In fact, Hoffer indicates that "it might be wise to wait for enemies to defeat themselves," as they might fall upon each other with the United States out of the picture. The view was somewhat borne out with the Cambodian-Vietnamese War and Chinese-Vietnamese War of the late 1970s. Hoffer's papers, including 131 of the notebooks he carried in his pockets, were acquired in 2000 by the Hoover Institution Archives. The papers fill 75 feet (23 m) of shelf space. Because Hoffer cultivated an aphoristic style, the unpublished notebooks (dated from 1949 to 1977) contain very significant work. Although available for scholarly study since at least 2003, little of their contents has been published. A selection of fifty aphorisms, focusing on the development of unrealized human talents through the creative process, appeared in the July 2005 issue of Harper's Magazine.
[ { "paragraph_id": 0, "text": "Eric Hoffer (July 25, 1902 – May 21, 1983) was an American moral and social philosopher. He was the author of ten books and was awarded the Presidential Medal of Freedom in February 1983. His first book, The True Believer (1951), was widely recognized as a classic, receiving critical acclaim from both scholars and laymen, although Hoffer believed that The Ordeal of Change (1963) was his finest work. The Eric Hoffer Book Award is an international literary prize established in his honor. The University of California, Berkeley awards an annual literary prize named jointly for Hoffer.", "title": "" }, { "paragraph_id": 1, "text": "Many elements of Hoffer's early life are in doubt and never verified, but in autobiographical statements, Hoffer claimed to have been born in 1902 in The Bronx, New York City, New York, to Knut and Elsa (Goebel) Hoffer. His parents were immigrants from Alsace, then part of Imperial Germany. By age five, Hoffer could already read in both English and his parents' native German. When he was five, his mother fell down the stairs with him in her arms. He later recalled, \"I lost my sight at the age of seven. Two years before, my mother and I fell down a flight of stairs. She did not recover and died in that second year after the fall. I lost my sight and, for a time, my memory.\" Hoffer spoke with a pronounced German accent all his life, and spoke the language fluently. He was raised by a live-in relative or servant, a German immigrant named Martha. His eyesight inexplicably returned when he was 15. Fearing he might lose it again, he seized on the opportunity to read as much as he could. His recovery proved permanent, but Hoffer never abandoned his reading habit.", "title": "Early life" }, { "paragraph_id": 2, "text": "Hoffer was a young man when he also lost his father. The cabinetmaker's union paid for Knut Hoffer's funeral and gave Hoffer about $300 insurance money. He took a bus to Los Angeles and spent the next 10 years wandering, as he remembered, “up and down the land, dodging hunger and grieving over the world.” Hoffer eventually landed on Skid Row, reading, occasionally writing, and working at odd jobs.", "title": "Early life" }, { "paragraph_id": 3, "text": "In 1931, he considered suicide by drinking a solution of oxalic acid, but he could not bring himself to do it. He left Skid Row and became a migrant worker, following the harvests in California. He acquired a library card where he worked, dividing his time \"between the books and the brothels.\" He also prospected for gold in the mountains. Snowed in for the winter, he read the Essays by Michel de Montaigne. Montaigne impressed Hoffer deeply, and Hoffer often made reference to him. He also developed a respect for America's underclass, which he said was \"lumpy with talent.\"", "title": "Early life" }, { "paragraph_id": 4, "text": "He wrote a novel, Four Years in Young Hank's Life, and a novella, Chance and Mr. Kunze, both partly autobiographical. He also penned a long article based on his experiences in a federal work camp, \"Tramps and Pioneers.\" It was never published, but a truncated version appeared in Harper's Magazine after he became well known.", "title": "Career" }, { "paragraph_id": 5, "text": "Hoffer tried to enlist in the U.S. Army at age 40 during World War II, but he was rejected due to a hernia. Instead, he began work as a longshoreman on the docks of San Francisco in 1943. 
At the same time, he began to write seriously.", "title": "Career" }, { "paragraph_id": 6, "text": "Hoffer left the docks in 1964, and shortly after became an adjunct professor at the University of California, Berkeley. He later retired from public life in 1970. “I'm going to crawl back into my hole where I started,” he said. “I don't want to be a public person or anybody's spokesman... Any man can ride a train. Only a wise man knows when to get off.” In 1970, he endowed the Lili Fabilli and Eric Hoffer Laconic Essay Prize for students, faculty, and staff at the University of California, Berkeley.", "title": "Career" }, { "paragraph_id": 7, "text": "Hoffer called himself an atheist but had sympathetic views of religion and described it as a positive force.", "title": "Career" }, { "paragraph_id": 8, "text": "He died at his home in San Francisco in 1983 at the age of 80.", "title": "Career" }, { "paragraph_id": 9, "text": "Hoffer was influenced by his modest roots and working-class surroundings, seeing in it vast human potential. In a letter to Margaret Anderson in 1941, he wrote: \"My writing is done in railroad yards while waiting for a freight, in the fields while waiting for a truck, and at noon after lunch. Towns are too distracting.\" He once remarked, \"my writing grows out of my life just as a branch from a tree.\" When he was called an intellectual, he insisted that he simply was a longshoreman. Hoffer has been dubbed by some authors a \"longshoreman philosopher.\"", "title": "Working-class roots" }, { "paragraph_id": 10, "text": "Hoffer, who was an only child, never married. He fathered a child with Lili Fabilli Osborne, named Eric Osborne, who was born in 1955 and raised by Lili Osborne and her husband, Selden Osborne. Lili Fabilli Osborne had become acquainted with Hoffer through her husband, a fellow longshoreman and acquaintance of Hoffer's. Despite this, Selden Osborne and Hoffer remained on good terms.", "title": "Personal life" }, { "paragraph_id": 11, "text": "Hoffer referred to Eric Osborne as his son or godson. Lili Fabilli Osborne died in 2010 at the age of 93. Prior to her death, Osborne was the executor of Hoffer's estate, and vigorously controlled the rights to his intellectual property.", "title": "Personal life" }, { "paragraph_id": 12, "text": "In his 2012 book Eric Hoffer: The Longshoreman Philosopher, journalist Tom Bethell revealed doubts about Hoffer's account of his early life. Although Hoffer claimed his parents were from Alsace-Lorraine, Hoffer himself spoke with a pronounced Bavarian accent. He claimed to have been born and raised in the Bronx but had no Bronx accent. His lover and executor Lili Fabilli stated that she always thought Hoffer was an immigrant. Her son, Eric Fabilli, said that Hoffer's life might have been comparable to that of B. 
Traven and considered hiring a genealogist to investigate Hoffer's early life, to which Hoffer reportedly replied, \"Are you sure you want to know?\" Pescadero land-owner Joe Gladstone, a family friend of the Fabilli's who also knew Hoffer, said of Hoffer's account of his early life: \"I don't believe a word of it.\" To this day, no one ever has claimed to have known Hoffer in his youth, and no records apparently exist of his parents, nor indeed of Hoffer himself until he was about forty, when his name appeared in a census.", "title": "Personal life" }, { "paragraph_id": 13, "text": "Hoffer came to public attention with the 1951 publication of his first book, The True Believer: Thoughts on the Nature of Mass Movements, which consists of a preface and 125 sections, which are divided into 18 chapters. Hoffer analyzes the phenomenon of \"mass movements,\" a general term that he applies to revolutionary parties, nationalistic movements, and religious movements. He summarizes his thesis in §113: \"A movement is pioneered by men of words, materialized by fanatics and consolidated by men of actions.\"", "title": "Books and opinions" }, { "paragraph_id": 14, "text": "Hoffer argues that fanatical and extremist cultural movements, whether religious, social, or national, arise when large numbers of frustrated people, believing their own individual lives to be worthless or spoiled, join a movement demanding radical change. But the real attraction for this population is an escape from the self, not a realization of individual hopes: \"A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation.\"", "title": "Books and opinions" }, { "paragraph_id": 15, "text": "Hoffer consequently argues that the appeal of mass movements is interchangeable: in the Germany of the 1920s and the 1930s, for example, the Communists and National Socialists were ostensibly enemies, but sometimes enlisted each other's members, since they competed for the same kind of marginalized, angry, frustrated people. For the \"true believer,\" Hoffer argues that particular beliefs are less important than escaping from the burden of the autonomous self.", "title": "Books and opinions" }, { "paragraph_id": 16, "text": "Harvard historian Arthur M. Schlesinger Jr. said of The True Believer: \"This brilliant and original inquiry into the nature of mass movements is a genuine contribution to our social thought.\"", "title": "Books and opinions" }, { "paragraph_id": 17, "text": "Subsequent to the publication of The True Believer (1951), Eric Hoffer touched upon Asia and American interventionism in several of his essays. In \"The Awakening of Asia\" (1954), published in The Reporter and later his book The Ordeal of Change (1963), Hoffer discusses the reasons for unrest on the continent. 
In particular, he argues that the root cause of social discontent in Asia was not government corruption, \"communist agitation,\" or the legacy of European colonial \"oppression and exploitation,\" but rather that a \"craving for pride\" was the central problem in Asia, suggesting a problem that could not be relieved through typical American intervention.", "title": "Books and opinions" }, { "paragraph_id": 18, "text": "During the Vietnam War, despite his objections to the antiwar movement and acceptance of the notion that the war was somehow necessary to prevent a third world war, Hoffer remained skeptical concerning American interventionism, specifically the intelligence with which the war was being conducted in Southeast Asia. After the United States became involved in the war, Hoffer wished to avoid defeat in Vietnam because of his fear that such a defeat would transform American society for ill, opening the door to those who would preach a stab-in-the-back myth and allow for the rise of an American version of Hitler.", "title": "Books and opinions" }, { "paragraph_id": 19, "text": "In The Temper of Our Time (1967), Hoffer implies that the United States as a rule should avoid interventions in the first place: \"the better part of statesmanship might be to know clearly and precisely what not to do, and leave action to the improvisation of chance.\" In fact, Hoffer indicates that \"it might be wise to wait for enemies to defeat themselves,\" as they might fall upon each other with the United States out of the picture. The view was somewhat borne out with the Cambodian-Vietnamese War and Chinese-Vietnamese War of the late 1970s.", "title": "Books and opinions" }, { "paragraph_id": 20, "text": "Hoffer's papers, including 131 of the notebooks he carried in his pockets, were acquired in 2000 by the Hoover Institution Archives. The papers fill 75 feet (23 m) of shelf space. Because Hoffer cultivated an aphoristic style, the unpublished notebooks (dated from 1949 to 1977) contain very significant work. Although available for scholarly study since at least 2003, little of their contents has been published. A selection of fifty aphorisms, focusing on the development of unrealized human talents through the creative process, appeared in the July 2005 issue of Harper's Magazine.", "title": "Papers" } ]
Eric Hoffer was an American moral and social philosopher. He was the author of ten books and was awarded the Presidential Medal of Freedom in February 1983. His first book, The True Believer (1951), was widely recognized as a classic, receiving critical acclaim from both scholars and laymen, although Hoffer believed that The Ordeal of Change (1963) was his finest work. The Eric Hoffer Book Award is an international literary prize established in his honor. The University of California, Berkeley awards an annual literary prize named jointly for Hoffer and Lili Fabilli.
2001-07-22T20:26:21Z
2023-09-08T12:58:25Z
[ "Template:Main article", "Template:Find a Grave", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:Cite journal", "Template:Short description", "Template:Infobox writer", "Template:ISBN", "Template:Dead link", "Template:Wikiquote", "Template:Authority control", "Template:Use mdy dates", "Template:Cite encyclopedia", "Template:Webarchive", "Template:Convert", "Template:Page needed", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Eric_Hoffer
9,577
European Coal and Steel Community
The European Coal and Steel Community (ECSC) was a European organization created after World War II to integrate Europe's coal and steel industries into a single common market based on the principle of supranationalism. It was formally established in 1951 by the Treaty of Paris, signed by Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany. The organization's subsequent enlargement of both members and duties ultimately led to the creation of the European Union. The ECSC was first proposed via the Schuman Declaration by French foreign minister Robert Schuman on 9 May 1950 (commemorated in the EU as Europe Day), the day after the fifth anniversary of the end of World War II, to prevent another war between France and Germany. He declared "the solidarity in production" from pooling "coal and steel production" would make war between the two "not only unthinkable but materially impossible". The Treaty created a common market among member states that stipulated free movement of goods (without customs duties or taxes) and prohibited states from introducing unfair competitive or discriminatory practices. Its terms were enforced by four institutions: a High Authority composed of independent appointees, a Common Assembly composed of national parliamentarians, a Special Council composed of national ministers, and a Court of Justice. These would ultimately form the blueprint for today's European Commission, European Parliament, the Council of the European Union, and the Court of Justice of the European Union, respectively. The ECSC set an example for the pan-European organizations created by the Treaty of Rome in 1957: the European Economic Community and European Atomic Energy Community, with whom it shared its membership and some institutions. The 1967 Merger (Brussels) Treaty merged the ECSC's institutions into the European Economic Community, but the former retained its own independent legal personality until the Treaty of Paris expired in 2002, leaving its activities fully absorbed by the European Community under the frameworks of the Treaties of Amsterdam and Nice. As Prime Minister and Foreign Minister, Schuman was instrumental in turning French policy away from the Gaullist objective of permanent occupation or control of parts of German territory such as the Ruhr or the Saar. Despite stiff ultra-nationalist, Gaullist and communist opposition, the French Assembly voted a number of resolutions in favour of his new policy of integrating Germany into a community. The International Authority for the Ruhr changed in consequence. The Schuman Declaration had the stated aim of preventing further antagonism between France and Germany and among other European states by tackling the root cause of war through the establishment of common foundations for economic development. Schuman proposed the formation of the ECSC primarily with France and Germany in mind: "The coming together of the nations of Europe requires the elimination of the age-old opposition of France and Germany. Any action taken must in the first place concern these two countries." Portraying the coal and steel industries as integral to the production of munitions, Schuman proposed that uniting these two industries across France and Germany under an innovative supranational system (that also included a European anti-cartel agency) would "make war between France and Germany [...] not only unthinkable but materially impossible". 
Following the Schuman Declaration in May 1950, negotiations on what became the Treaty of Paris (1951) began on 20 June 1950. The objective of the treaty was to create a single market in the coal and steel industries of the member states. Customs duties, subsidies, discriminatory and restrictive practices were all to be abolished. The single market was to be supervised by a High Authority, with powers to handle extreme shortages of supply or demand, to tax, and to prepare production forecasts as guidelines for investment. A key issue in the negotiations for the treaty was the break-up of the excessive concentrations in the coal and steel industries of the Ruhr, where the Konzerne, or trusts, had underlain the military power of the former Reich. The Germans regarded the concentration of coal and steel as one of the bases of their economic efficiency, and a right. The steel barons were a formidable lobby because they embodied a national tradition. The US was not officially part of the treaty negotiations, but it was a major force behind the scenes. The US High Commissioner for Occupied Germany, John McCloy, was an advocate of decartelization and his chief advisor in Germany was a Harvard anti-trust lawyer, Robert Bowie. Bowie was asked to draft anti-trust articles, and texts of the two articles he prepared (on cartels and the abuse of monopoly power) became the basis of the treaty's competition policy regime. Also, Raymond Vernon (of later fame for his studies on industrial policy at Harvard university) was passing every clause of successive drafts of the treaty under his microscope down in the bowels of the State Department. He stressed the importance of the freedom of the projected common market from restrictive practices. The Americans insisted that the German coal sales monopoly, the Deutscher Kohlenverkauf (DKV), should lose its monopoly, and that the steel industries should no longer own the coalmines. It was agreed that the DKV would be broken up into four independent sales agencies. The steel firm Vereinigte Stahlwerke was to be divided into thirteen firms, and Krupp into two. Ten years after the Schuman negotiations, a US State Department official noted that while the articles as finally agreed were more qualified than American officials in touch with the negotiations would have wished, they were "almost revolutionary" in terms of the traditional European approach to these basic industries. In West Germany, Karl Arnold, the Minister President of North Rhine-Westphalia, the state that included the coal and steel producing Ruhr, was initially spokesman for German foreign affairs. He gave a number of speeches and broadcasts on a supranational coal and steel community at the same time as Robert Schuman began to propose this Community in 1948 and 1949. The Social Democratic Party of Germany (German: Sozialdemokratische Partei Deutschlands, SPD), in spite of support from unions and other socialists in Europe, decided it would oppose the Schuman plan. Kurt Schumacher's personal distrust of France, capitalism, and Konrad Adenauer aside, he claimed that a focus on integrating with a "Little Europe of the Six" would override the SPD's prime objective of German reunification and thus empower ultra-nationalist and Communist movements in democratic countries. He also thought the ECSC would end any hopes of nationalising the steel industry and lock in a Europe of "cartels, clerics and conservatives". 
Younger members of the party like Carlo Schmid, were, however, in favor of the Community and pointed to the long socialist support for the supranational idea. In France, Schuman had gained strong political and intellectual support from all sections of the nation and many non-communist parties. Notable amongst these were ministerial colleague Andre Philip, president of the Foreign Relations Committee Edouard Bonnefous, and former prime minister, Paul Reynaud. Projects for a coal and steel authority and other supranational communities were formulated in specialist subcommittees of the Council of Europe in the period before it became French government policy. Charles de Gaulle, who was then out of power, had been an early supporter of "linkages" between economies, on French terms, and had spoken in 1945 of a "European confederation" that would exploit the resources of the Ruhr. However, he opposed the ECSC as a faux (false) pooling ("le pool, ce faux semblant") because he considered it an unsatisfactory "piecemeal approach" to European unity and because he considered the French government "too weak" to dominate the ECSC as he thought proper. De Gaulle also felt that the ECSC had an insufficient supranational mandate because its Assembly was not ratified by a European referendum and he did not accept Raymond Aron's contention that the ECSC was intended as a movement away from United States domination. Consequently, de Gaulle and his followers in the RPF voted against ratification in the lower house of the French Parliament. Despite these attacks and those from the extreme left, the ECSC found substantial public support. It gained strong majority votes in all eleven chambers of the parliaments of the Six, as well as approval among associations and European public opinion. In 1950, many had thought another war was inevitable. The steel and coal interests, however, were quite vocal in their opposition. The Council of Europe, created by a proposal of Schuman's first government in May 1948, helped articulate European public opinion and gave the Community idea positive support. The UK Prime Minister Clement Attlee opposed Britain joining the proposed European Coal and Steel Community, saying that he 'would not accept the [UK] economy being handed over to an authority that is utterly undemocratic and is responsible to nobody.' The 100-article Treaty of Paris, which established the ECSC, was signed on 18 April 1951 by "the inner six": France, West Germany, Italy, Belgium, the Netherlands and Luxembourg. The ECSC was based on supranational principles and was, through the establishment of a common market for coal and steel, intended to expand the economy, increase employment, and raise the standard of living within the Community. The market was also intended to progressively rationalise the distribution of production whilst ensuring stability and employment. The common market for coal was opened on 10 February 1953, and for steel on 1 May 1953. Upon taking effect, the ECSC replaced the International Authority for the Ruhr. On 11 August 1952, the United States was the first non-ECSC member to recognise the Community and stated it would now deal with the ECSC on coal and steel matters, establishing its delegation in Brussels. Monnet responded by choosing Washington, D.C. as the site of the ECSC's first external presence. The headline of the delegation's first bulletin read "Towards a Federal Government of Europe". 
Six years after the Treaty of Paris, the Treaties of Rome were signed by the six ECSC members, creating the European Economic Community (EEC) and the European Atomic Energy Community (EAEC or Euratom). These Communities were based, with some adjustments, on the ECSC. The Treaties of Rome were to be in force indefinitely, unlike the Treaty of Paris, which was to last for a renewable period of fifty years. These two new Communities worked on the creation of a customs union and nuclear power community respectively. Despite being separate legal entities, the ECSC, EEC and Euratom initially shared the Common Assembly and the European Court of Justice, although the Councils and the High Authority/Commissions remained separate. To avoid duplication, the Merger Treaty merged these separate bodies of the ECSC and Euratom with the EEC. The EEC later became one of the three pillars of the present day European Union. The Treaty of Paris was frequently amended as the EC and EU evolved and expanded. With the treaty due to expire in 2002, debate began at the beginning of the 1990s on what to do with it. It was eventually decided that it should be left to expire. The areas covered by the ECSC's treaty were transferred to the Treaty of Rome and the financial loose ends and the ECSC research fund were dealt with via a protocol of the Treaty of Nice. The treaty finally expired on 23 July 2002. That day, the ECSC flag was lowered for the final time outside the European Commission in Brussels and replaced with the EU flag. The institutions of the ECSC were the High Authority, the Common Assembly, the Special Council of Ministers and the Court of Justice. A Consultative Committee was established alongside the High Authority, as a fifth institution representing producers, workers, consumers and dealers (article 18). These institutions were merged in 1967 with those of the European Community, except for the Consultative Committee, which continued to be independent until the expiration of the Treaty of Paris in 2002. The Treaty stated that the location of the institutions would be decided by common accord of the members, yet the issue was hotly contested. As a temporary compromise, the institutions were provisionally located in the City of Luxembourg, while the Assembly was based in Strasbourg. The High Authority (the predecessor to the European Commission) was a nine-member executive body which governed the ECSC. The Authority consisted of nine members in office for a term of six years, appointed by the governments of the six signatories. Two were from each of France, Germany and Italy; and one from each of Belgium, Luxembourg, and the Netherlands. These members appointed a person among themselves to be President of the High Authority. Despite being appointed by agreement of national governments acting together, the members were to pledge not to represent their national interest, but rather took an oath to defend the general interests of the Community as a whole. Their independence was aided by members being barred from having any occupation outside the Authority or having any business interests (paid or unpaid) during their tenure and for three years after they left office. To further ensure impartiality, one third of the membership was to be renewed every two years (article 10). The Authority had a broad area of competence to ensure the objectives of the treaty were met and that the common market functioned smoothly. 
The High Authority could issue three types of legal instruments: Decisions, which were entirely binding laws; Recommendations, which had binding aims but the methods were left to member states; and Opinions, which had no legal force. Up to the merger in 1967, the authority had five Presidents followed by an interim President serving for the final days. The Common Assembly (the forerunner to the European Parliament) was composed of 78 representatives: 18 from each of France, Germany, and Italy; 10 from Belgium and the Netherlands; and 4 from Luxembourg (article 21). It exercised supervisory powers over the executive High Authority (article 20). The Common Assembly representatives were to be national MPs delegated each year by their Parliaments to the Assembly or directly elected "by universal suffrage" (article 21), though in practice it was the former, as there was no requirement for elections until the Treaties of Rome and no actual election until 1979, as Rome required agreement in the council on the electoral system first. However, to emphasise that the chamber was not a traditional international organisation composed of representatives of national governments, the Treaty of Paris used the term "representatives of the peoples". Some hoped the Community would use the institutions (Assembly, Court) of the Council of Europe, and The Treaty's Protocol on Relations with the Council of Europe encouraged links between the two institutions' assemblies. The ECSC Assembly was intended as a democratic counter-weight and check to the High Authority, to advise but also to have power to sack the Authority (article 24). The first President (akin to a Speaker) was Paul-Henri Spaak. The Special Council of Ministers (the forerunner to the Council of the European Union) was composed of representatives of national governments. The Presidency was held by each state for a period of three months, rotating between them in alphabetical order. One of its key aspects was the harmonisation of the work of the High Authority and that of national governments. The council was also required to issue opinions on certain areas of work of the High Authority. Issues relating only to coal and steel were in the exclusive domain of the High Authority, and in these areas the council (unlike the modern Council) could only act as a scrutiny on the Authority. However, areas outside coal and steel required the consent of the council. The Court of Justice was to ensure the observation of ECSC law along with the interpretation and application of the Treaty. The Court was composed of seven judges, appointed by common accord of the national governments for six years. There were no requirements that the judges had to be of a certain nationality, simply that they be qualified and that their independence be beyond doubt. The Court was assisted by two Advocates General. The Consultative Committee (forerunner to the Economic and Social Committee) had between 30 and 51 members equally divided between producers, workers, consumers and dealers in the coal and steel sector (article 18). There were no national quotas, and the treaty required representatives of European associations to organise their own democratic procedures. They were to establish rules to make their membership fully representative for democratic organised civil society. Members were appointed for two years and were not bound by any mandate or instruction of the organisations which appointed them. The committee had a plenary assembly, bureau and president. 
Nomination of these members remained in the hands of the council. The High Authority was obliged to consult the Committee in certain cases where it was appropriate and to keep it informed. The Consultative Committee remained separate (despite the merger of the other institutions) until 2002, when the Treaty expired and its duties were taken over by the Economic and Social Committee (ESC). After the original six members, the ECSC expanded to all members of the European Economic Community (later renamed the European Community) and the European Union (15 countries in 2002 at the time of the expiry of the Treaty of Paris). Schuman described the goal as to "make war not only unthinkable but materially impossible" for signatory States. This is described in the treaty's preamble. It commences quoting the French Government Proposal of Schuman: "World peace cannot be safeguarded without creative measures commensurate with the dangers which threaten it." Europe had been at the centre of world wars. For this the Community created the world's first international anti-cartel agency. Treaty Chapters VI Ententes et Concentrations and VII on the Free Market describe joint action against cartels and trusts which were instrumental in world war arms races, and activities leading to the disruption of the free market. The Six Founder Member States are now living in the longest period of peace in more than 2000 years of their histories. The economic mission of the ECSC (article 2) was to "contribute to economic expansion, the development of employment and the improvement of the standard of living in participating countries". Writing in Le Monde in 1970, Gilbert Mathieu argued the Community had little effect on coal and steel production, which was influenced more by global trends. From 1952, oil, gas, and electricity became competitors to coal, so the 28% reduction in the amount of coal mined in the Six had little connection with the Treaty of Paris. However, the Treaty caused costs to be reduced by the abolition of discriminatory railway tariffs, and this promoted trade between members: steel trade increased tenfold. The High Authority also issued 280 modernization loans which helped the industry to improve output and reduce costs. Mathieu claims the ECSC failed to achieve several fundamental aims of the Treaty of Paris. He argues that the "pool" did not prevent the resurgence of large coal and steel groups, such as the Konzerne, which helped Adolf Hitler build his war machine. The cartels and major companies re-emerged, leading to apparent price fixing. Furthermore, the Community failed to define a common energy policy. Mathieu also argues the ECSC fell short of ensuring an upward equalisation of pay of workers within the industry. These failures could be put down to overambition in a short period of time, or that the goals were merely political posturing to be ignored. The ECSC's greatest achievements relate to welfare issues, according to Mathieu. Some miners had extremely poor housing and over 15 years the ECSC financed 112,500 flats for workers, paying US$1,770 per flat, enabling workers to buy a home they could not have otherwise afforded. The ECSC also paid half the occupational redeployment costs of those workers who had lost their jobs as coal and steel facilities began to close down. Combined with regional redevelopment aid the ECSC spent $150 million (835 million francs) creating around 100,000 jobs, a third of which were offered to unemployed coal and steel workers. 
The welfare guarantees invented by the ECSC were copied and extended by several of the Six to workers outside the coal and steel sectors. Far more important than creating Europe's first social and regional policy, Robert Schuman argued that the ECSC introduced European peace. It involved the continent's first European tax. This was a flat tax, a levy on production with a maximum rate of one percent. Given that the European Community countries are now experiencing the longest period of peace in more than seventy years, this has been described as the cheapest tax for peace in history. Another world war, or "world suicide" as Schuman called this threat in 1949, was avoided.
[ { "paragraph_id": 0, "text": "The European Coal and Steel Community (ECSC) was a European organization created after World War II to integrate Europe's coal and steel industries into a single common market based on the principle of supranationalism. It was formally established in 1951 by the Treaty of Paris, signed by Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany. The organization's subsequent enlargement of both members and duties ultimately led to the creation of the European Union.", "title": "" }, { "paragraph_id": 1, "text": "The ECSC was first proposed via the Schuman Declaration by French foreign minister Robert Schuman on 9 May 1950 (commemorated in the EU as Europe Day), the day after the fifth anniversary of the end of World War II, to prevent another war between France and Germany. He declared \"the solidarity in production\" from pooling \"coal and steel production\" would make war between the two \"not only unthinkable but materially impossible\". The Treaty created a common market among member states that stipulated free movement of goods (without customs duties or taxes) and prohibited states from introducing unfair competitive or discriminatory practices.", "title": "" }, { "paragraph_id": 2, "text": "Its terms were enforced by four institutions: a High Authority composed of independent appointees, a Common Assembly composed of national parliamentarians, a Special Council composed of national ministers, and a Court of Justice. These would ultimately form the blueprint for today's European Commission, European Parliament, the Council of the European Union, and the Court of Justice of the European Union, respectively.", "title": "" }, { "paragraph_id": 3, "text": "The ECSC set an example for the pan-European organizations created by the Treaty of Rome in 1957: the European Economic Community and European Atomic Energy Community, with whom it shared its membership and some institutions. The 1967 Merger (Brussels) Treaty merged the ECSC's institutions into the European Economic Community, but the former retained its own independent legal personality until the Treaty of Paris expired in 2002, leaving its activities fully absorbed by the European Community under the frameworks of the Treaties of Amsterdam and Nice.", "title": "" }, { "paragraph_id": 4, "text": "As Prime Minister and Foreign Minister, Schuman was instrumental in turning French policy away from the Gaullist objective of permanent occupation or control of parts of German territory such as the Ruhr or the Saar. Despite stiff ultra-nationalist, Gaullist and communist opposition, the French Assembly voted a number of resolutions in favour of his new policy of integrating Germany into a community. The International Authority for the Ruhr changed in consequence.", "title": "History" }, { "paragraph_id": 5, "text": "The Schuman Declaration had the stated aim of preventing further antagonism between France and Germany and among other European states by tackling the root cause of war through the establishment of common foundations for economic development. Schuman proposed the formation of the ECSC primarily with France and Germany in mind: \"The coming together of the nations of Europe requires the elimination of the age-old opposition of France and Germany. 
Any action taken must in the first place concern these two countries.\" Portraying the coal and steel industries as integral to the production of munitions, Schuman proposed that uniting these two industries across France and Germany under an innovative supranational system (that also included a European anti-cartel agency) would \"make war between France and Germany [...] not only unthinkable but materially impossible\".", "title": "History" }, { "paragraph_id": 6, "text": "Following the Schuman Declaration in May 1950, negotiations on what became the Treaty of Paris (1951) began on 20 June 1950. The objective of the treaty was to create a single market in the coal and steel industries of the member states. Customs duties, subsidies, discriminatory and restrictive practices were all to be abolished. The single market was to be supervised by a High Authority, with powers to handle extreme shortages of supply or demand, to tax, and to prepare production forecasts as guidelines for investment.", "title": "History" }, { "paragraph_id": 7, "text": "A key issue in the negotiations for the treaty was the break-up of the excessive concentrations in the coal and steel industries of the Ruhr, where the Konzerne, or trusts, had underlain the military power of the former Reich. The Germans regarded the concentration of coal and steel as one of the bases of their economic efficiency, and a right. The steel barons were a formidable lobby because they embodied a national tradition.", "title": "History" }, { "paragraph_id": 8, "text": "The US was not officially part of the treaty negotiations, but it was a major force behind the scenes. The US High Commissioner for Occupied Germany, John McCloy, was an advocate of decartelization and his chief advisor in Germany was a Harvard anti-trust lawyer, Robert Bowie. Bowie was asked to draft anti-trust articles, and texts of the two articles he prepared (on cartels and the abuse of monopoly power) became the basis of the treaty's competition policy regime. Also, Raymond Vernon (of later fame for his studies on industrial policy at Harvard university) was passing every clause of successive drafts of the treaty under his microscope down in the bowels of the State Department. He stressed the importance of the freedom of the projected common market from restrictive practices.", "title": "History" }, { "paragraph_id": 9, "text": "The Americans insisted that the German coal sales monopoly, the Deutscher Kohlenverkauf (DKV), should lose its monopoly, and that the steel industries should no longer own the coalmines. It was agreed that the DKV would be broken up into four independent sales agencies. The steel firm Vereinigte Stahlwerke was to be divided into thirteen firms, and Krupp into two. Ten years after the Schuman negotiations, a US State Department official noted that while the articles as finally agreed were more qualified than American officials in touch with the negotiations would have wished, they were \"almost revolutionary\" in terms of the traditional European approach to these basic industries.", "title": "History" }, { "paragraph_id": 10, "text": "In West Germany, Karl Arnold, the Minister President of North Rhine-Westphalia, the state that included the coal and steel producing Ruhr, was initially spokesman for German foreign affairs. He gave a number of speeches and broadcasts on a supranational coal and steel community at the same time as Robert Schuman began to propose this Community in 1948 and 1949. 
The Social Democratic Party of Germany (German: Sozialdemokratische Partei Deutschlands, SPD), in spite of support from unions and other socialists in Europe, decided it would oppose the Schuman plan. Kurt Schumacher's personal distrust of France, capitalism, and Konrad Adenauer aside, he claimed that a focus on integrating with a \"Little Europe of the Six\" would override the SPD's prime objective of German reunification and thus empower ultra-nationalist and Communist movements in democratic countries. He also thought the ECSC would end any hopes of nationalising the steel industry and lock in a Europe of \"cartels, clerics and conservatives\". Younger members of the party like Carlo Schmid, were, however, in favor of the Community and pointed to the long socialist support for the supranational idea.", "title": "History" }, { "paragraph_id": 11, "text": "In France, Schuman had gained strong political and intellectual support from all sections of the nation and many non-communist parties. Notable amongst these were ministerial colleague Andre Philip, president of the Foreign Relations Committee Edouard Bonnefous, and former prime minister, Paul Reynaud. Projects for a coal and steel authority and other supranational communities were formulated in specialist subcommittees of the Council of Europe in the period before it became French government policy. Charles de Gaulle, who was then out of power, had been an early supporter of \"linkages\" between economies, on French terms, and had spoken in 1945 of a \"European confederation\" that would exploit the resources of the Ruhr. However, he opposed the ECSC as a faux (false) pooling (\"le pool, ce faux semblant\") because he considered it an unsatisfactory \"piecemeal approach\" to European unity and because he considered the French government \"too weak\" to dominate the ECSC as he thought proper. De Gaulle also felt that the ECSC had an insufficient supranational mandate because its Assembly was not ratified by a European referendum and he did not accept Raymond Aron's contention that the ECSC was intended as a movement away from United States domination. Consequently, de Gaulle and his followers in the RPF voted against ratification in the lower house of the French Parliament.", "title": "History" }, { "paragraph_id": 12, "text": "Despite these attacks and those from the extreme left, the ECSC found substantial public support. It gained strong majority votes in all eleven chambers of the parliaments of the Six, as well as approval among associations and European public opinion. In 1950, many had thought another war was inevitable. The steel and coal interests, however, were quite vocal in their opposition. The Council of Europe, created by a proposal of Schuman's first government in May 1948, helped articulate European public opinion and gave the Community idea positive support.", "title": "History" }, { "paragraph_id": 13, "text": "The UK Prime Minister Clement Attlee opposed Britain joining the proposed European Coal and Steel Community, saying that he 'would not accept the [UK] economy being handed over to an authority that is utterly undemocratic and is responsible to nobody.'", "title": "History" }, { "paragraph_id": 14, "text": "The 100-article Treaty of Paris, which established the ECSC, was signed on 18 April 1951 by \"the inner six\": France, West Germany, Italy, Belgium, the Netherlands and Luxembourg. 
The ECSC was based on supranational principles and was, through the establishment of a common market for coal and steel, intended to expand the economy, increase employment, and raise the standard of living within the Community. The market was also intended to progressively rationalise the distribution of production whilst ensuring stability and employment. The common market for coal was opened on 10 February 1953, and for steel on 1 May 1953. Upon taking effect, the ECSC replaced the International Authority for the Ruhr.", "title": "History" }, { "paragraph_id": 15, "text": "On 11 August 1952, the United States was the first non-ECSC member to recognise the Community and stated it would now deal with the ECSC on coal and steel matters, establishing its delegation in Brussels. Monnet responded by choosing Washington, D.C. as the site of the ECSC's first external presence. The headline of the delegation's first bulletin read \"Towards a Federal Government of Europe\".", "title": "History" }, { "paragraph_id": 16, "text": "Six years after the Treaty of Paris, the Treaties of Rome were signed by the six ECSC members, creating the European Economic Community (EEC) and the European Atomic Energy Community (EAEC or Euratom). These Communities were based, with some adjustments, on the ECSC. The Treaties of Rome were to be in force indefinitely, unlike the Treaty of Paris, which was to last for a renewable period of fifty years. These two new Communities worked on the creation of a customs union and nuclear power community respectively.", "title": "History" }, { "paragraph_id": 17, "text": "Despite being separate legal entities, the ECSC, EEC and Euratom initially shared the Common Assembly and the European Court of Justice, although the Councils and the High Authority/Commissions remained separate. To avoid duplication, the Merger Treaty merged these separate bodies of the ECSC and Euratom with the EEC. The EEC later became one of the three pillars of the present day European Union.", "title": "History" }, { "paragraph_id": 18, "text": "The Treaty of Paris was frequently amended as the EC and EU evolved and expanded. With the treaty due to expire in 2002, debate began at the beginning of the 1990s on what to do with it. It was eventually decided that it should be left to expire. The areas covered by the ECSC's treaty were transferred to the Treaty of Rome and the financial loose ends and the ECSC research fund were dealt with via a protocol of the Treaty of Nice. The treaty finally expired on 23 July 2002. That day, the ECSC flag was lowered for the final time outside the European Commission in Brussels and replaced with the EU flag.", "title": "History" }, { "paragraph_id": 19, "text": "The institutions of the ECSC were the High Authority, the Common Assembly, the Special Council of Ministers and the Court of Justice. A Consultative Committee was established alongside the High Authority, as a fifth institution representing producers, workers, consumers and dealers (article 18). These institutions were merged in 1967 with those of the European Community, except for the Consultative Committee, which continued to be independent until the expiration of the Treaty of Paris in 2002.", "title": "Institutions" }, { "paragraph_id": 20, "text": "The Treaty stated that the location of the institutions would be decided by common accord of the members, yet the issue was hotly contested. 
As a temporary compromise, the institutions were provisionally located in the City of Luxembourg, while the Assembly was based in Strasbourg.", "title": "Institutions" }, { "paragraph_id": 21, "text": "The High Authority (the predecessor to the European Commission) was a nine-member executive body which governed the ECSC. The Authority consisted of nine members in office for a term of six years, appointed by the governments of the six signatories. Two were from each of France, Germany and Italy; and one from each of Belgium, Luxembourg, and the Netherlands. These members appointed a person among themselves to be President of the High Authority.", "title": "Institutions" }, { "paragraph_id": 22, "text": "Despite being appointed by agreement of national governments acting together, the members were to pledge not to represent their national interest, but rather took an oath to defend the general interests of the Community as a whole. Their independence was aided by members being barred from having any occupation outside the Authority or having any business interests (paid or unpaid) during their tenure and for three years after they left office. To further ensure impartiality, one third of the membership was to be renewed every two years (article 10).", "title": "Institutions" }, { "paragraph_id": 23, "text": "The Authority had a broad area of competence to ensure the objectives of the treaty were met and that the common market functioned smoothly. The High Authority could issue three types of legal instruments: Decisions, which were entirely binding laws; Recommendations, which had binding aims but the methods were left to member states; and Opinions, which had no legal force.", "title": "Institutions" }, { "paragraph_id": 24, "text": "Up to the merger in 1967, the authority had five Presidents followed by an interim President serving for the final days.", "title": "Institutions" }, { "paragraph_id": 25, "text": "The Common Assembly (the forerunner to the European Parliament) was composed of 78 representatives: 18 from each of France, Germany, and Italy; 10 from Belgium and the Netherlands; and 4 from Luxembourg (article 21). It exercised supervisory powers over the executive High Authority (article 20). The Common Assembly representatives were to be national MPs delegated each year by their Parliaments to the Assembly or directly elected \"by universal suffrage\" (article 21), though in practice it was the former, as there was no requirement for elections until the Treaties of Rome and no actual election until 1979, as Rome required agreement in the council on the electoral system first. However, to emphasise that the chamber was not a traditional international organisation composed of representatives of national governments, the Treaty of Paris used the term \"representatives of the peoples\". Some hoped the Community would use the institutions (Assembly, Court) of the Council of Europe, and The Treaty's Protocol on Relations with the Council of Europe encouraged links between the two institutions' assemblies. The ECSC Assembly was intended as a democratic counter-weight and check to the High Authority, to advise but also to have power to sack the Authority (article 24). The first President (akin to a Speaker) was Paul-Henri Spaak.", "title": "Institutions" }, { "paragraph_id": 26, "text": "The Special Council of Ministers (the forerunner to the Council of the European Union) was composed of representatives of national governments. 
The Presidency was held by each state for a period of three months, rotating between them in alphabetical order. One of its key aspects was the harmonisation of the work of the High Authority and that of national governments. The council was also required to issue opinions on certain areas of work of the High Authority. Issues relating only to coal and steel were in the exclusive domain of the High Authority, and in these areas the council (unlike the modern Council) could only act as a scrutiny on the Authority. However, areas outside coal and steel required the consent of the council.", "title": "Institutions" }, { "paragraph_id": 27, "text": "The Court of Justice was to ensure the observation of ECSC law along with the interpretation and application of the Treaty. The Court was composed of seven judges, appointed by common accord of the national governments for six years. There were no requirements that the judges had to be of a certain nationality, simply that they be qualified and that their independence be beyond doubt. The Court was assisted by two Advocates General.", "title": "Institutions" }, { "paragraph_id": 28, "text": "The Consultative Committee (forerunner to the Economic and Social Committee) had between 30 and 51 members equally divided between producers, workers, consumers and dealers in the coal and steel sector (article 18). There were no national quotas, and the treaty required representatives of European associations to organise their own democratic procedures. They were to establish rules to make their membership fully representative for democratic organised civil society. Members were appointed for two years and were not bound by any mandate or instruction of the organisations which appointed them. The committee had a plenary assembly, bureau and president. Nomination of these members remained in the hands of the council. The High Authority was obliged to consult the Committee in certain cases where it was appropriate and to keep it informed. The Consultative Committee remained separate (despite the merger of the other institutions) until 2002, when the Treaty expired and its duties were taken over by the Economic and Social Committee (ESC).", "title": "Institutions" }, { "paragraph_id": 29, "text": "After the original six members, the ECSC expanded to all members of the European Economic Community (later renamed the European Community) and the European Union (15 countries in 2002 at the time of the expiry of the Treaty of Paris).", "title": "Members" }, { "paragraph_id": 30, "text": "Schuman described the goal as to \"make war not only unthinkable but materially impossible\" for signatory States. This is described in the treaty's preamble. It commences quoting the French Government Proposal of Schuman:", "title": "Achievements and shortcomings" }, { "paragraph_id": 31, "text": "\"World peace cannot be safeguarded without creative measures commensurate with the dangers which threaten it.\" Europe had been at the centre of world wars. For this the Community created the world's first international anti-cartel agency. 
Treaty Chapters VI Ententes et Concentrations and VII on the Free Market describe joint action against cartels and trusts which were instrumental in world war arms races, and activities leading to the disruption of the free market.", "title": "Achievements and shortcomings" }, { "paragraph_id": 32, "text": "The Six Founder Member States are now living in the longest period of peace in more than 2000 years of their histories.", "title": "Achievements and shortcomings" }, { "paragraph_id": 33, "text": "The economic mission of the ECSC (article 2) was to \"contribute to economic expansion, the development of employment and the improvement of the standard of living in participating countries\". Writing in Le Monde in 1970, Gilbert Mathieu argued the Community had little effect on coal and steel production, which was influenced more by global trends. From 1952, oil, gas, and electricity became competitors to coal, so the 28% reduction in the amount of coal mined in the Six had little connection with the Treaty of Paris. However, the Treaty caused costs to be reduced by the abolition of discriminatory railway tariffs, and this promoted trade between members: steel trade increased tenfold. The High Authority also issued 280 modernization loans which helped the industry to improve output and reduce costs.", "title": "Achievements and shortcomings" }, { "paragraph_id": 34, "text": "Mathieu claims the ECSC failed to achieve several fundamental aims of the Treaty of Paris. He argues that the \"pool\" did not prevent the resurgence of large coal and steel groups, such as the Konzerne, which helped Adolf Hitler build his war machine. The cartels and major companies re-emerged, leading to apparent price fixing. Furthermore, the Community failed to define a common energy policy. Mathieu also argues the ECSC fell short of ensuring an upward equalisation of pay of workers within the industry. These failures could be put down to overambition in a short period of time, or that the goals were merely political posturing to be ignored.", "title": "Achievements and shortcomings" }, { "paragraph_id": 35, "text": "The ECSC's greatest achievements relate to welfare issues, according to Mathieu. Some miners had extremely poor housing and over 15 years the ECSC financed 112,500 flats for workers, paying US$1,770 per flat, enabling workers to buy a home they could not have otherwise afforded. The ECSC also paid half the occupational redeployment costs of those workers who had lost their jobs as coal and steel facilities began to close down. Combined with regional redevelopment aid the ECSC spent $150 million (835 million francs) creating around 100,000 jobs, a third of which were offered to unemployed coal and steel workers. The welfare guarantees invented by the ECSC were copied and extended by several of the Six to workers outside the coal and steel sectors.", "title": "Achievements and shortcomings" }, { "paragraph_id": 36, "text": "Far more important than creating Europe's first social and regional policy, Robert Schuman argued that the ECSC introduced European peace. It involved the continent's first European tax. This was a flat tax, a levy on production with a maximum rate of one percent. Given that the European Community countries are now experiencing the longest period of peace in more than seventy years, this has been described as the cheapest tax for peace in history. Another world war, or \"world suicide\" as Schuman called this threat in 1949, was avoided.", "title": "Achievements and shortcomings" } ]
The European Coal and Steel Community (ECSC) was a European organization created after World War II to integrate Europe's coal and steel industries into a single common market based on the principle of supranationalism. It was formally established in 1951 by the Treaty of Paris, signed by Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany. The organization's subsequent enlargement of both members and duties ultimately led to the creation of the European Union. The ECSC was first proposed via the Schuman Declaration by French foreign minister Robert Schuman on 9 May 1950, the day after the fifth anniversary of the end of World War II, to prevent another war between France and Germany. He declared "the solidarity in production" from pooling "coal and steel production" would make war between the two "not only unthinkable but materially impossible". The Treaty created a common market among member states that stipulated free movement of goods and prohibited states from introducing unfair competitive or discriminatory practices. Its terms were enforced by four institutions: a High Authority composed of independent appointees, a Common Assembly composed of national parliamentarians, a Special Council composed of national ministers, and a Court of Justice. These would ultimately form the blueprint for today's European Commission, European Parliament, the Council of the European Union, and the Court of Justice of the European Union, respectively. The ECSC set an example for the pan-European organizations created by the Treaty of Rome in 1957: the European Economic Community and European Atomic Energy Community, with whom it shared its membership and some institutions. The 1967 Merger (Brussels) Treaty merged the ECSC's institutions into the European Economic Community, but the former retained its own independent legal personality until the Treaty of Paris expired in 2002, leaving its activities fully absorbed by the European Community under the frameworks of the Treaties of Amsterdam and Nice.
2001-08-15T22:27:10Z
2023-11-22T02:12:53Z
[ "Template:Dead link", "Template:Rp", "Template:EU history", "Template:Cite book", "Template:European Union topics", "Template:Use Oxford spelling", "Template:Infobox Former International Organization", "Template:Further", "Template:Lang-de", "Template:Citation needed", "Template:Cite web", "Template:Use dmy dates", "Template:Good article", "Template:Main", "Template:Dts", "Template:Reflist", "Template:Webarchive", "Template:Commons category", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/European_Coal_and_Steel_Community
9,578
European Economic Community
The European Economic Community (EEC) was a regional organisation created by the Treaty of Rome of 1957, aiming to foster economic integration among its member states. It was subsequently renamed the European Community (EC) upon becoming integrated into the first pillar of the newly formed European Union in 1993. In the popular language, however, the singular European Community was sometimes inaccurately used in the wider sense of the plural European Communities, in spite of the latter designation covering all the three constituent entities of the first pillar. In 2009, the EC formally ceased to exist and its institutions were directly absorbed by the EU. This made the Union the formal successor institution of the Community. The Community's initial aim was to bring about economic integration, including a common market and customs union, among its six founding members: Belgium, France, Italy, Luxembourg, the Netherlands and West Germany. It gained a common set of institutions along with the European Coal and Steel Community (ECSC) and the European Atomic Energy Community (EURATOM) as one of the European Communities under the 1965 Merger Treaty (Treaty of Brussels). In 1993 a complete single market was achieved, known as the internal market, which allowed for the free movement of goods, capital, services, and people within the EEC. In 1994 the internal market was formalised by the EEA agreement. This agreement also extended the internal market to include most of the member states of the European Free Trade Association, forming the European Economic Area, which encompasses 15 countries. Upon the entry into force of the Maastricht Treaty in 1993, the EEC was renamed the European Community to reflect that it covered a wider range than economic policy. This was also when the three European Communities, including the EC, were collectively made to constitute the first of the three pillars of the European Union, which the treaty also founded. The EC existed in this form until it was abolished by the 2009 Treaty of Lisbon, which incorporated the EC's institutions into the EU's wider framework and provided that the EU would "replace and succeed the European Community". The EEC was also known as the European Common Market in the English-speaking countries and sometimes referred to as the European Community even before it was officially renamed as such in 1993. In April 1951, the Treaty of Paris was signed, creating the European Coal and Steel Community (ECSC). This was an international community based on supranationalism and international law, designed to help the economy of Europe and prevent future war by integrating its members. With the aim of creating a federal Europe two further communities were proposed: a European Defence Community and a European Political Community. While the treaty for the latter was being drawn up by the Common Assembly, the ECSC parliamentary chamber, the proposed defence community was rejected by the French Parliament. ECSC President Jean Monnet, a leading figure behind the communities, resigned from the High Authority in protest and began work on alternative communities, based on economic integration rather than political integration. Following the Messina Conference in 1955, Paul-Henri Spaak was given the task to prepare a report on the idea of a customs union. The so-called Spaak Report of the Spaak Committee formed the cornerstone of the intergovernmental negotiations at Val Duchesse conference centre in 1956. 
Together with the Ohlin Report the Spaak Report would provide the basis for the Treaty of Rome. In 1956, Paul-Henri Spaak led the Intergovernmental Conference on the Common Market and Euratom at the Val Duchesse conference centre, which prepared for the Treaty of Rome in 1957. The conference led to the signature, on 25 March 1957, of the Treaty of Rome establishing a European Economic Community. The resulting communities were the European Economic Community (EEC) and the European Atomic Energy Community (EURATOM or sometimes EAEC). These were markedly less supranational than the previous communities, due to protests from some countries that their sovereignty was being infringed (however there would still be concerns with the behaviour of the Hallstein Commission). Germany became a founding member of the EEC, and Konrad Adenauer was made leader in a very short time. The first formal meeting of the Hallstein Commission was held on 16 January 1958 at the Chateau de Val-Duchesse. The EEC (direct ancestor of the modern Community) was to create a customs union while Euratom would promote co-operation in the nuclear power sphere. The EEC rapidly became the most important of these and expanded its activities. The first move towards political developments came at the end of 1959 when the foreign ministers of the six members announced that they would be meeting quarterly to discuss political issues and international problems. One of the first important accomplishments of the EEC was the establishment (1962) of common price levels for agricultural products. In 1968, internal tariffs (tariffs on trade between member nations) were removed on certain products. Another crisis was triggered in regard to proposals for the financing of the Common Agricultural Policy, which came into force in 1962. The transitional period whereby decisions were made by unanimity had come to an end, and majority-voting in the council had taken effect. Then-French President Charles de Gaulle's opposition to supranationalism and fear of the other members challenging the CAP led to an "empty chair policy" whereby French representatives were withdrawn from the European institutions until the French veto was reinstated. Eventually, a compromise was reached with the Luxembourg compromise on 29 January 1966 whereby a gentlemen's agreement permitted members to use a veto on areas of national interest. On 1 July 1967, when the Merger Treaty came into operation, combining the institutions of the ECSC and Euratom into those of the EEC, they already shared a Parliamentary Assembly and Courts. Collectively they were known as the European Communities. The Communities still had independent personalities although they were increasingly integrated. Future treaties granted the Community new powers beyond simple economic matters, which had already achieved a high level of integration, bringing it closer to the goal of political integration and a peaceful and united Europe: what Mikhail Gorbachev described as a Common European Home. The 1960s saw the first attempts at enlargement. In 1961, Denmark, Ireland and the United Kingdom applied to join the three Communities, followed by Norway in 1962. However, President Charles de Gaulle saw British membership as a Trojan Horse for U.S. influence and vetoed membership, and the applications of all four countries were suspended. 
Greece became the first country to join the EC in 1961 as an associate member; however, its membership was suspended in 1967 after a coup d'état established a military dictatorship called the Regime of the Colonels. A year later, in February 1962, Spain attempted to join the European Community. However, because Francoist Spain was not a democracy, all members rejected the request in 1964. The four countries resubmitted their applications on 11 May 1967, and with Georges Pompidou succeeding Charles de Gaulle as French president in 1969, the veto was lifted. Negotiations began in 1970 under the pro-European UK government of Edward Heath, who had to deal with disagreements relating to the Common Agricultural Policy and the UK's relationship with the Commonwealth of Nations. Nevertheless, two years later the accession treaties were signed so that Denmark, Ireland and the UK joined the Community effective 1 January 1973. The Norwegian people had rejected membership in a referendum on 25 September 1972. The Treaties of Rome had stated that the European Parliament must be directly elected; however, this required the Council to agree on a common voting system first. The Council procrastinated on the issue and the Parliament remained appointed. French President Charles de Gaulle was particularly active in blocking the development of the Parliament, with it only being granted budgetary powers following his resignation. Parliament pressed for agreement, and on 20 September 1976 the Council agreed on part of the necessary instruments for election, deferring details on electoral systems, which remain varied to this day. During the tenure of President Jenkins, in June 1979, the elections were held in all the then-member states (see 1979 European Parliament election). The new Parliament, galvanised by direct election and new powers, started working full-time and became more active than the previous assemblies. Shortly after its election, the Parliament proposed that the Community adopt the flag of Europe design used by the Council of Europe. The European Council in 1984 appointed an ad hoc committee for this purpose. The European Council in 1985 largely followed the Committee's recommendations, but as the adoption of a flag was strongly reminiscent of a national flag representing statehood and was therefore controversial, the "flag of Europe" design was adopted only with the status of a "logo" or "emblem". The European Council, or European summit, had developed since the 1960s as an informal meeting of the Council at the level of heads of state. It had originated from then-French President Charles de Gaulle's resentment at the domination of supranational institutions (e.g. the Commission) over the integration process. It was mentioned in the treaties for the first time in the Single European Act (see below). Greece re-applied to join the community on 12 June 1975, following the restoration of democracy, and joined on 1 January 1981. Following on from Greece, and after their own democratic restoration, Spain and Portugal applied to the communities in 1977 and joined together on 1 January 1986. In 1987, Turkey formally applied to join the Community and began the longest application process for any country. With the prospect of further enlargement, and a desire to increase areas of co-operation, the Single European Act was signed by the foreign ministers on 17 and 28 February 1986 in Luxembourg and The Hague respectively.
In a single document it dealt with reform of institutions, extension of powers, foreign policy cooperation and the single market. It came into force on 1 July 1987. The act was followed by work on what would be the Maastricht Treaty, which was agreed on 10 December 1991, signed the following year and came into force on 1 November 1993, establishing the European Union and paving the way for the European Monetary Union. The EU absorbed the European Communities as one of its three pillars. The EEC's areas of activities were enlarged and it was renamed the European Community, continuing to follow the supranational structure of the EEC. The EEC institutions became those of the EU; however, the Court, Parliament and Commission had only limited input in the new pillars, as they worked on a more intergovernmental system than the European Communities. This was reflected in the names of the institutions: the Council was formally the "Council of the European Union" while the Commission was formally the "Commission of the European Communities". There are more competencies listed in Article 3 of the European Communities pillar than there are in Article 3 of the Treaty of Rome. This is due to the fact that some competencies were already inherent in the Treaty of Rome, some were referred to in the Treaty of Rome, and some were extended under Article 235 of the Treaty of Rome. Competencies were added to cover trans-European networks, and the work of the Culture Committee and Education Committee, which had previously shared existing competencies. The only entry in Article 3 that represented something new was the competence covering the entry and movement of persons in the internal market. However, after the Treaty of Maastricht, Parliament gained a more formal role. Maastricht brought in the codecision procedure, which gave it equal legislative power with the Council on Community matters. This replaced the informal parliamentary blocking powers established by the 1979 Isoglucose decision. It also abolished any remaining state-like simple majority voting in the EEC, replacing it with qualified majority voting, a procedure more commonly used in international organisations. The Treaty of Amsterdam transferred responsibility for free movement of persons (e.g., visas, illegal immigration, asylum) from the Justice and Home Affairs (JHA) pillar to the European Community (JHA was renamed Police and Judicial Co-operation in Criminal Matters (PJCC) as a result). Both Amsterdam and the Treaty of Nice also extended the codecision procedure to nearly all policy areas, giving Parliament equal power to the Council in the Community. In 2002, the Treaty of Paris, which established the ECSC, expired, having reached its 50-year limit (as the first treaty, it was the only one with a limit). No attempt was made to renew its mandate; instead, the Treaty of Nice transferred certain of its elements to the Treaty of Rome and hence its work continued as part of the European Community's remit. After the entry into force of the Treaty of Lisbon in 2009, the pillar structure ceased to exist. The European Community, together with its legal personality, was absorbed into the newly consolidated European Union, into which the other two pillars were also merged (however, Euratom remained distinct). This was originally proposed under the European Constitution but that treaty failed ratification in 2005.
The main aim of the EEC, as stated in its preamble, was to "preserve peace and liberty and to lay the foundations of an ever closer union among the peoples of Europe". Calling for balanced economic growth, this was to be accomplished through: Citing Article 2 from the original text of the Treaty of Rome of 25 March 1957, the EEC aimed at "a harmonious development of economic activities, a continuous and balanced expansion, an increase in stability, an accelerated raising of the standard of living and closer relations between the States belonging to it". Given the fear of the Cold War, many Western Europeans were afraid that poverty would make "the population vulnerable to communist propaganda" (Meurs 2018, p. 68), meaning that increasing prosperity would be beneficial to harmonise power between the Western and Eastern blocs, as well as to reconcile Member States such as France and Germany after WW2. The tasks entrusted to the Community were divided among an assembly (the European Parliament), the Council, the Commission, and the Court of Justice. Moreover, restrictions on the market were lifted to further liberalise trade among Member States. Citizens of Member States, as well as goods, services, and capital, were entitled to freedom of movement. The Common Agricultural Policy (CAP) regulated and subsidised the agricultural sphere. A European Social Fund was implemented in support of workers who had lost their jobs. A European Investment Bank was established to "facilitate the economic expansion of the Community by opening up fresh resources" (Art. 3, Treaty of Rome, 25 March 1957). All these implementations included overseas territories. Competition was to be kept alive to make products cheaper for European consumers. For the customs union, the treaty provided for a 10% reduction in customs duties and up to 20% of global import quotas. Progress on the customs union proceeded much faster than the twelve years planned. However, France faced some setbacks due to its war in Algeria. The six states that founded the EEC and the other two Communities were known as the "inner six" (the "outer seven" were those countries that formed the European Free Trade Association). The six were France, West Germany, Italy and the three Benelux countries: Belgium, the Netherlands and Luxembourg. The first enlargement was in 1973, with the accession of Denmark, Ireland and the United Kingdom. Greece, Spain and Portugal joined in the 1980s. The former East Germany became part of the EEC upon German reunification in 1990. Following the creation of the EU in 1993, it has enlarged to include an additional sixteen countries by 2013. Member states are represented in some form in each institution. The Council is composed of one national minister from each state, who represents their national government. Each state also has the right to one European Commissioner, although in the European Commission they are not supposed to represent their national interest but that of the Community. Prior to 2004, the larger members (France, Germany, Italy and the United Kingdom) had two Commissioners. In the European Parliament, members are allocated a set number of seats related to their population; however, since 1979 they have been directly elected, and they sit according to political allegiance, not national origin. Most other institutions, including the European Court of Justice, have some form of national division of their members.
There were three political institutions which held the executive and legislative power of the EEC, plus one judicial institution and a fifth body created in 1975. These institutions (except for the auditors) were created in 1957 by the EEC but from 1967 onwards they applied to all three Communities. The Council represents the state governments, the Parliament represents citizens and the Commission represents the European interest. Essentially, the Council, Parliament or another party places a request for legislation to the Commission. The Commission then drafts this and presents it to the Council for approval and the Parliament for an opinion (in some cases it had a veto, depending upon the legislative procedure in use). The Commission's duty is to ensure it is implemented by dealing with the day-to-day running of the Union and taking others to Court if they fail to comply. After the Maastricht Treaty in 1993, these institutions became those of the European Union, though limited in some areas due to the pillar structure. Despite this, Parliament in particular has gained more power over legislation and scrutiny of the Commission. The Court of Justice was the highest legal authority, settling legal disputes in the Community, while the Auditors had no power but to investigate. The EEC inherited some of the institutions of the ECSC in that the Common Assembly and Court of Justice of the ECSC had their authority extended to the EEC and Euratom in the same role. However, the EEC and Euratom had different executive bodies from the ECSC. In place of the ECSC's Council of Ministers was the Council of the European Economic Community, and in place of the High Authority was the Commission of the European Communities. The difference between these was more than one of name: the French government of the day had grown suspicious of the supranational power of the High Authority and sought to curb its powers in favour of the intergovernmental-style Council. Hence the Council had a greater executive role in the running of the EEC than was the situation in the ECSC. By virtue of the Merger Treaty in 1967, the executives of the ECSC and Euratom were merged with that of the EEC, creating a single institutional structure governing the three separate Communities. From here on, the term European Communities was used for the institutions (for example, from Commission of the European Economic Community to the Commission of the European Communities). The Council of the European Communities was a body holding legislative and executive powers and was thus the main decision-making body of the Community. Its Presidency rotated between the member states every six months and it was related to the European Council, which was an informal gathering of national leaders (started in 1961) on the same basis as the Council. The Council was composed of one national minister from each member state. However, the Council met in various forms depending upon the topic. For example, if agriculture was being discussed, the Council would be composed of each national minister for agriculture. They represented their governments and were accountable to their national political systems. Votes were taken either by majority (with votes allocated according to population) or unanimity. In these various forms they shared some legislative and budgetary power with the Parliament.
Since the 1960s the Council also began to meet informally at the level of heads of government and heads of state; these European summits followed the same presidency system and secretariat as the Council but were not a formal formation of it. The Commission of the European Communities was the executive arm of the Community, drafting Community law, dealing with the day-to-day running of the Community and upholding the treaties. It was designed to be independent, representing the interest of the Community as a whole. Every member state submitted its own commissioners (two from each of the larger states, one from the smaller states). One of its members was the President, appointed by the Council, who chaired the body and represented it. Under the Community, the European Parliament (formerly the European Parliamentary Assembly) had an advisory role to the Council and Commission. There were a number of Community legislative procedures; at first there was only the consultation procedure, which meant Parliament had to be consulted, although it was often ignored. The Single European Act gave Parliament more power, with the assent procedure giving it a right to veto proposals and the cooperation procedure giving it equal power with the Council if the Council was not unanimous. In 1970 and 1975, the Budgetary treaties gave Parliament power over the Community budget. The Parliament's members, up until 1980, were national MPs serving part-time in the Parliament. The Treaties of Rome had required elections to be held once the Council had decided on a voting system, but this did not happen and elections were delayed until 1979 (see 1979 European Parliament election). After that, Parliament was elected every five years. In the following 20 years, it gradually won co-decision powers with the Council over the adoption of legislation, the right to approve or reject the appointment of the Commission President and the Commission as a whole, and the right to approve or reject international agreements entered into by the Community. The Court of Justice of the European Communities was the highest court on matters of Community law and was composed of one judge per state with a president elected from among them. Its role was to ensure that Community law was applied in the same way across all states and to settle legal disputes between institutions or states. It became a powerful institution as Community law overrides national law. The fifth institution was the European Court of Auditors. It ensured that taxpayer funds from the Community budget had been correctly spent by the Community's institutions. The ECA provided an audit report for each financial year to the Council and Parliament and gave opinions and proposals on financial legislation and anti-fraud actions. It is the only institution not mentioned in the original treaties, having been set up in 1975. At the time of its abolition, the European Community pillar covered the following areas: Since the end of World War II, sovereign European countries have entered into treaties and thereby co-operated and harmonised policies (or pooled sovereignty) in an increasing number of areas, in the European integration project or the construction of Europe (French: la construction européenne). The following timeline outlines the legal inception of the European Union (EU), the principal framework for this unification. The EU inherited many of its present responsibilities from the European Communities (EC), which were founded in the 1950s in the spirit of the Schuman Declaration.
[ { "paragraph_id": 0, "text": "The European Economic Community (EEC) was a regional organisation created by the Treaty of Rome of 1957, aiming to foster economic integration among its member states. It was subsequently renamed the European Community (EC) upon becoming integrated into the first pillar of the newly formed European Union in 1993. In the popular language, however, the singular European Community was sometimes inaccurately used in the wider sense of the plural European Communities, in spite of the latter designation covering all the three constituent entities of the first pillar.", "title": "" }, { "paragraph_id": 1, "text": "In 2009, the EC formally ceased to exist and its institutions were directly absorbed by the EU. This made the Union the formal successor institution of the Community.", "title": "" }, { "paragraph_id": 2, "text": "The Community's initial aim was to bring about economic integration, including a common market and customs union, among its six founding members: Belgium, France, Italy, Luxembourg, the Netherlands and West Germany. It gained a common set of institutions along with the European Coal and Steel Community (ECSC) and the European Atomic Energy Community (EURATOM) as one of the European Communities under the 1965 Merger Treaty (Treaty of Brussels). In 1993 a complete single market was achieved, known as the internal market, which allowed for the free movement of goods, capital, services, and people within the EEC. In 1994 the internal market was formalised by the EEA agreement. This agreement also extended the internal market to include most of the member states of the European Free Trade Association, forming the European Economic Area, which encompasses 15 countries.", "title": "" }, { "paragraph_id": 3, "text": "Upon the entry into force of the Maastricht Treaty in 1993, the EEC was renamed the European Community to reflect that it covered a wider range than economic policy. This was also when the three European Communities, including the EC, were collectively made to constitute the first of the three pillars of the European Union, which the treaty also founded. The EC existed in this form until it was abolished by the 2009 Treaty of Lisbon, which incorporated the EC's institutions into the EU's wider framework and provided that the EU would \"replace and succeed the European Community\".", "title": "" }, { "paragraph_id": 4, "text": "The EEC was also known as the European Common Market in the English-speaking countries and sometimes referred to as the European Community even before it was officially renamed as such in 1993.", "title": "" }, { "paragraph_id": 5, "text": "In April 1951, the Treaty of Paris was signed, creating the European Coal and Steel Community (ECSC). This was an international community based on supranationalism and international law, designed to help the economy of Europe and prevent future war by integrating its members.", "title": "History" }, { "paragraph_id": 6, "text": "With the aim of creating a federal Europe two further communities were proposed: a European Defence Community and a European Political Community. While the treaty for the latter was being drawn up by the Common Assembly, the ECSC parliamentary chamber, the proposed defence community was rejected by the French Parliament. ECSC President Jean Monnet, a leading figure behind the communities, resigned from the High Authority in protest and began work on alternative communities, based on economic integration rather than political integration. 
Following the Messina Conference in 1955, Paul-Henri Spaak was given the task to prepare a report on the idea of a customs union. The so-called Spaak Report of the Spaak Committee formed the cornerstone of the intergovernmental negotiations at Val Duchesse conference centre in 1956. Together with the Ohlin Report the Spaak Report would provide the basis for the Treaty of Rome.", "title": "History" }, { "paragraph_id": 7, "text": "In 1956, Paul-Henri Spaak led the Intergovernmental Conference on the Common Market and Euratom at the Val Duchesse conference centre, which prepared for the Treaty of Rome in 1957. The conference led to the signature, on 25 March 1957, of the Treaty of Rome establishing a European Economic Community.", "title": "History" }, { "paragraph_id": 8, "text": "The resulting communities were the European Economic Community (EEC) and the European Atomic Energy Community (EURATOM or sometimes EAEC). These were markedly less supranational than the previous communities, due to protests from some countries that their sovereignty was being infringed (however there would still be concerns with the behaviour of the Hallstein Commission). Germany became a founding member of the EEC, and Konrad Adenauer was made leader in a very short time. The first formal meeting of the Hallstein Commission was held on 16 January 1958 at the Chateau de Val-Duchesse. The EEC (direct ancestor of the modern Community) was to create a customs union while Euratom would promote co-operation in the nuclear power sphere. The EEC rapidly became the most important of these and expanded its activities. The first move towards political developments came at the end of 1959 when the foreign ministers of the six members announced that would be meeting quarterly to discuss political issues and international problems. One of the first important accomplishments of the EEC was the establishment (1962) of common price levels for agricultural products. In 1968, internal tariffs (tariffs on trade between member nations) were removed on certain products.", "title": "History" }, { "paragraph_id": 9, "text": "Another crisis was triggered in regard to proposals for the financing of the Common Agricultural Policy, which came into force in 1962. The transitional period whereby decisions were made by unanimity had come to an end, and majority-voting in the council had taken effect. Then-French President Charles de Gaulle's opposition to supranationalism and fear of the other members challenging the CAP led to an \"empty chair policy\" whereby French representatives were withdrawn from the European institutions until the French veto was reinstated. Eventually, a compromise was reached with the Luxembourg compromise on 29 January 1966 whereby a gentlemen's agreement permitted members to use a veto on areas of national interest.", "title": "History" }, { "paragraph_id": 10, "text": "On 1 July 1967 when the Merger Treaty came into operation, combining the institutions of the ECSC and Euratom into that of the EEC, they already shared a Parliamentary Assembly and Courts. Collectively they were known as the European Communities. The Communities still had independent personalities although were increasingly integrated. Future treaties granted the community new powers beyond simple economic matters which had achieved a high level of integration. 
As it got closer to the goal of political integration and a peaceful and united Europe, what Mikhail Gorbachev described as a Common European Home.", "title": "History" }, { "paragraph_id": 11, "text": "The 1960s saw the first attempts at enlargement. In 1961, Denmark, Ireland, the United Kingdom and Norway (in 1962), applied to join the three Communities. However, President Charles de Gaulle saw British membership as a Trojan Horse for U.S. influence and vetoed membership, and the applications of all four countries were suspended. Greece became the first country to join the EC in 1961 as an associate member, however its membership was suspended in 1967 after a coup d'état established a military dictatorship called the Regime of the Colonels.", "title": "History" }, { "paragraph_id": 12, "text": "A year later, in February 1962, Spain attempted to join the European Community. However, because Francoist Spain was not a democracy, all members rejected the request in 1964.", "title": "History" }, { "paragraph_id": 13, "text": "The four countries resubmitted their applications on 11 May 1967 and with Georges Pompidou succeeding Charles de Gaulle as French president in 1969, the veto was lifted. Negotiations began in 1970 under the pro-European UK government of Edward Heath, who had to deal with disagreements relating to the Common Agricultural Policy and the UK's relationship with the Commonwealth of Nations. Nevertheless, two years later the accession treaties were signed so that Denmark, Ireland and the UK joined the Community effective 1 January 1973. The Norwegian people had rejected membership in a referendum on 25 September 1972.", "title": "History" }, { "paragraph_id": 14, "text": "The Treaties of Rome had stated that the European Parliament must be directly elected, however this required the Council to agree on a common voting system first. The Council procrastinated on the issue and the Parliament remained appointed, French President Charles de Gaulle was particularly active in blocking the development of the Parliament, with it only being granted Budgetary powers following his resignation.", "title": "History" }, { "paragraph_id": 15, "text": "Parliament pressured for agreement and on 20 September 1976 the Council agreed part of the necessary instruments for election, deferring details on electoral systems which remain varied to this day. During the tenure of President Jenkins, in June 1979, the elections were held in all the then-members (see 1979 European Parliament election). The new Parliament, galvanised by direct election and new powers, started working full-time and became more active than the previous assemblies.", "title": "History" }, { "paragraph_id": 16, "text": "Shortly after its election, the Parliament proposed that the Community adopt the flag of Europe design used by the Council of Europe. The European Council in 1984 appointed an ad hoc committee for this purpose. The European Council in 1985 largely followed the Committee's recommendations, but as the adoption of a flag was strongly reminiscent of a national flag representing statehood, was controversial, the \"flag of Europe\" design was adopted only with the status of a \"logo\" or \"emblem\".", "title": "History" }, { "paragraph_id": 17, "text": "The European Council, or European summit, had developed since the 1960s as an informal meeting of the Council at the level of heads of state. 
It had originated from then-French President Charles de Gaulle's resentment at the domination of supranational institutions (e.g. the Commission) over the integration process. It was mentioned in the treaties for the first time in the Single European Act (see below).", "title": "History" }, { "paragraph_id": 18, "text": "", "title": "History" }, { "paragraph_id": 19, "text": "Greece re-applied to join the community on 12 June 1975, following the restoration of democracy, and joined on 1 January 1981. Following on from Greece, and after their own democratic restoration, Spain and Portugal applied to the communities in 1977 and joined together on 1 January 1986. In 1987, Turkey formally applied to join the Community and began the longest application process for any country.", "title": "History" }, { "paragraph_id": 20, "text": "With the prospect of further enlargement, and a desire to increase areas of co-operation, the Single European Act was signed by the foreign ministers on 17 and 28 February 1986 in Luxembourg and The Hague respectively. In a single document it dealt with reform of institutions, extension of powers, foreign policy cooperation and the single market. It came into force on 1 July 1987. The act was followed by work on what would be the Maastricht Treaty, which was agreed on 10 December 1991, signed the following year and coming into force on 1 November 1993 establishing the European Union, and paving the way for the European Monetary Union.", "title": "History" }, { "paragraph_id": 21, "text": "The EU absorbed the European Communities as one of its three pillars. The EEC's areas of activities were enlarged and were renamed the European Community, continuing to follow the supranational structure of the EEC. The EEC institutions became those of the EU, however the Court, Parliament and Commission had only limited input in the new pillars, as they worked on a more intergovernmental system than the European Communities. This was reflected in the names of the institutions, the Council was formally the \"Council of the European Union\" while the Commission was formally the \"Commission of the European Communities\".", "title": "History" }, { "paragraph_id": 22, "text": "There are more competencies listed in Article 3 of the European Communities pillar than there are in Article 3 of the Treaty of Rome. This is due to the fact that some competencies were already inherent in the Treaty of Tome, some were referred to in the Treaty of Rome, and some were extended under Article 235 of the Treaty of Rome. Competencies were added to cover trans-European networks, and the work of the Culture Committee and Education Committee that were previously sharing existing competencies. The only entry in Article 3 that represented something new is the competence covering the entry and movement of persons in the internal market.", "title": "History" }, { "paragraph_id": 23, "text": "However, after the Treaty of Maastricht, Parliament gained a more formal role. Maastricht brought in the codecision procedure, which gave it equal legislative power with the Council on Community matters. 
This replaced the informal parliamentary blocking powers established by the 1979 Isoglucose decision.", "title": "History" }, { "paragraph_id": 24, "text": "It also abolished any existing state like Simple Majority voting in the EEC, replacing it with Qualified Majority Voting, a procedure more commonly used in international organisations.", "title": "History" }, { "paragraph_id": 25, "text": "The Treaty of Amsterdam transferred responsibility for free movement of persons (e.g., visas, illegal immigration, asylum) from the Justice and Home Affairs (JHA) pillar to the European Community (JHA was renamed Police and Judicial Co-operation in Criminal Matters (PJCC) as a result). Both Amsterdam and the Treaty of Nice also extended codecision procedure to nearly all policy areas, giving Parliament equal power to the Council in the Community.", "title": "History" }, { "paragraph_id": 26, "text": "In 2002, the Treaty of Paris which established the ECSC expired, having reached its 50-year limit (as the first treaty, it was the only one with a limit). No attempt was made to renew its mandate; instead, the Treaty of Nice transferred certain of its elements to the Treaty of Rome and hence its work continued as part of the EC area of the European Community's remit.", "title": "History" }, { "paragraph_id": 27, "text": "After the entry into force of the Treaty of Lisbon in 2009 the pillar structure ceased to exist. The European Community, together with its legal personality, was absorbed into the newly consolidated European Union which merged in the other two pillars (however Euratom remained distinct). This was originally proposed under the European Constitution but that treaty failed ratification in 2005.", "title": "History" }, { "paragraph_id": 28, "text": "The main aim of the EEC, as stated in its preamble, was to \"preserve peace and liberty and to lay the foundations of an ever closer union among the peoples of Europe\". Calling for balanced economic growth, this was to be accomplished through:", "title": "Aims and achievements" }, { "paragraph_id": 29, "text": "Citing Article 2 from the original text of the Treaty of Rome of the 25th of March 1957, the EEC aimed at \"a harmonious development of economic activities, a continuous and balanced expansion, an increase in stability, an accelerated raising of the standard of living and closer relations between the States belonging to it\". Given the fear of the Cold War, many Western Europeans were afraid that poverty would make \"the population vulnerable to communist propaganda\" (Meurs 2018, p. 68), meaning that increasing prosperity would be beneficial to harmonise power between the Western and Eastern blocs, other than reconcile Member States such as France and Germany after WW2. The tasks entrusted to the Community were divided among an assembly, the European Parliament, Council, Commission, and Court of Justice. Moreover, restrictions to market were lifted to further liberate trade among Member States. Citizens of Member States (other than goods, services, and capital) were entitled to freedom of movement. The CAP, Common Agricultural Policy, regulated and subsided the agricultural sphere. A European Social Fund was implemented in favour of employees who lost their jobs. A European Investment Bank was established to \"facilitate the economic expansion of the Community by opening up fresh resources\" (Art. 3 Treaty of Rome 3/25/1957). All these implementations included overseas territories. 
Competition was to be kept alive to make products cheaper for European consumers.", "title": "Aims and achievements" }, { "paragraph_id": 30, "text": "For the customs union, the treaty provided for a 10% reduction in custom duties and up to 20% of global import quotas. Progress on the customs union proceeded much faster than the twelve years planned. However, France faced some setbacks due to their war with Algeria.", "title": "Aims and achievements" }, { "paragraph_id": 31, "text": "The six states that founded the EEC and the other two Communities were known as the \"inner six\" (the \"outer seven\" were those countries who formed the European Free Trade Association). The six were France, West Germany, Italy and the three Benelux countries: Belgium, the Netherlands and Luxembourg. The first enlargement was in 1973, with the accession of Denmark, Ireland and the United Kingdom. Greece, Spain and Portugal joined in the 1980s. The former East Germany became part of the EEC upon German reunification in 1990. Following the creation of the EU in 1993, it has enlarged to include an additional sixteen countries by 2013.", "title": "Members" }, { "paragraph_id": 32, "text": "Member states are represented in some form in each institution. The Council is also composed of one national minister who represents their national government. Each state also has a right to one European Commissioner each, although in the European Commission they are not supposed to represent their national interest but that of the Community. Prior to 2004, the larger members (France, Germany, Italy and the United Kingdom) have had two Commissioners. In the European Parliament, members are allocated a set number seats related to their population, however these (since 1979) have been directly elected and they sit according to political allegiance, not national origin. Most other institutions, including the European Court of Justice, have some form of national division of its members.", "title": "Members" }, { "paragraph_id": 33, "text": "There were three political institutions which held the executive and legislative power of the EEC, plus one judicial institution and a fifth body created in 1975. These institutions (except for the auditors) were created in 1957 by the EEC but from 1967 onwards they applied to all three Communities. The Council represents the state governments, the Parliament represents citizens and the Commission represents the European interest. Essentially, the Council, Parliament or another party place a request for legislation to the Commission. The Commission then drafts this and presents it to the Council for approval and the Parliament for an opinion (in some cases it had a veto, depending upon the legislative procedure in use). The Commission's duty is to ensure it is implemented by dealing with the day-to-day running of the Union and taking others to Court if they fail to comply. After the Maastricht Treaty in 1993, these institutions became those of the European Union, though limited in some areas due to the pillar structure. Despite this, Parliament in particular has gained more power over legislation and security of the Commission. 
The Court of Justice was the highest authority in the law, settling legal disputes in the Community, while the Auditors had no power but to investigate.", "title": "Institutions" }, { "paragraph_id": 34, "text": "The EEC inherited some of the Institutions of the ECSC in that the Common Assembly and Court of Justice of the ECSC had their authority extended to the EEC and Euratom in the same role. However the EEC, and Euratom, had different executive bodies to the ECSC. In place of the ECSC's Council of Ministers was the Council of the European Economic Community, and in place of the High Authority was the Commission of the European Communities.", "title": "Institutions" }, { "paragraph_id": 35, "text": "There was greater difference between these than name: the French government of the day had grown suspicious of the supranational power of the High Authority and sought to curb its powers in favour of the intergovernmental style Council. Hence the Council had a greater executive role in the running of the EEC than was the situation in the ECSC. By virtue of the Merger Treaty in 1967, the executives of the ECSC and Euratom were merged with that of the EEC, creating a single institutional structure governing the three separate Communities. From here on, the term European Communities were used for the institutions (for example, from Commission of the European Economic Community to the Commission of the European Communities).", "title": "Institutions" }, { "paragraph_id": 36, "text": "The Council of the European Communities was a body holding legislative and executive powers and was thus the main decision making body of the Community. Its Presidency rotated between the member states every six months and it is related to the European Council, which was an informal gathering of national leaders (started in 1961) on the same basis as the Council.", "title": "Institutions" }, { "paragraph_id": 37, "text": "The Council was composed of one national minister from each member state. However the Council met in various forms depending upon the topic. For example, if agriculture was being discussed, the Council would be composed of each national minister for agriculture. They represented their governments and were accountable to their national political systems. Votes were taken either by majority (with votes allocated according to population) or unanimity. In these various forms they share some legislative and budgetary power of the Parliament. Since the 1960s the Council also began to meet informally at the level of heads of government and heads of state; these European summits followed the same presidency system and secretariat as the Council but was not a formal formation of it.", "title": "Institutions" }, { "paragraph_id": 38, "text": "The Commission of the European Communities was the executive arm of the community, drafting Community law, dealing with the day to running of the Community and upholding the treaties. It was designed to be independent, representing the interest of the Community as a whole. Every member state submitted one commissioner (two from each of the larger states, one from the smaller states). One of its members was the President, appointed by the Council, who chaired the body and represented it.", "title": "Institutions" }, { "paragraph_id": 39, "text": "Under the Community, the European Parliament (formerly the European Parliamentary Assembly) had an advisory role to the Council and Commission. 
There were a number of Community legislative procedures, at first there was only the consultation procedure, which meant Parliament had to be consulted, although it was often ignored. The Single European Act gave Parliament more power, with the assent procedure giving it a right to veto proposals and the cooperation procedure giving it equal power with the Council if the Council was not unanimous.", "title": "Institutions" }, { "paragraph_id": 40, "text": "In 1970 and 1975, the Budgetary treaties gave Parliament power over the Community budget. The Parliament's members, up-until 1980 were national MPs serving part-time in the Parliament. The Treaties of Rome had required elections to be held once the Council had decided on a voting system, but this did not happen and elections were delayed until 1979 (see 1979 European Parliament election). After that, Parliament was elected every five years. In the following 20 years, it gradually won co-decision powers with the Council over the adoption of legislation, the right to approve or reject the appointment of the Commission President and the Commission as a whole, and the right to approve or reject international agreements entered into by the Community.", "title": "Institutions" }, { "paragraph_id": 41, "text": "The Court of Justice of the European Communities was the highest court of on matters of Community law and was composed of one judge per state with a president elected from among them. Its role was to ensure that Community law was applied in the same way across all states and to settle legal disputes between institutions or states. It became a powerful institution as Community law overrides national law.", "title": "Institutions" }, { "paragraph_id": 42, "text": "The fifth institution is the European Court of Auditors. Its ensured that taxpayer funds from the Community budget had been correctly spent by the Community's institutions. The ECA provided an audit report for each financial year to the Council and Parliament and gave opinions and proposals on financial legislation and anti-fraud actions. It is the only institution not mentioned in the original treaties, having been set up in 1975.", "title": "Institutions" }, { "paragraph_id": 43, "text": "At the time of its abolition, the European Community pillar covered the following areas;", "title": "Policy areas" }, { "paragraph_id": 44, "text": "Since the end of World War II, sovereign European countries have entered into treaties and thereby co-operated and harmonised policies (or pooled sovereignty) in an increasing number of areas, in the European integration project or the construction of Europe (French: la construction européenne). The following timeline outlines the legal inception of the European Union (EU)—the principal framework for this unification. The EU inherited many of its present responsibilities from the European Communities (EC), which were founded in the 1950s in the spirit of the Schuman Declaration.", "title": "EU evolution timeline" }, { "paragraph_id": 45, "text": "", "title": "EU evolution timeline" } ]
The European Economic Community (EEC) was a regional organisation created by the Treaty of Rome of 1957, aiming to foster economic integration among its member states. It was subsequently renamed the European Community (EC) upon becoming integrated into the first pillar of the newly formed European Union in 1993. In the popular language, however, the singular European Community was sometimes inaccurately used in the wider sense of the plural European Communities, in spite of the latter designation covering all the three constituent entities of the first pillar. In 2009, the EC formally ceased to exist and its institutions were directly absorbed by the EU. This made the Union the formal successor institution of the Community. The Community's initial aim was to bring about economic integration, including a common market and customs union, among its six founding members: Belgium, France, Italy, Luxembourg, the Netherlands and West Germany. It gained a common set of institutions along with the European Coal and Steel Community (ECSC) and the European Atomic Energy Community (EURATOM) as one of the European Communities under the 1965 Merger Treaty. In 1993 a complete single market was achieved, known as the internal market, which allowed for the free movement of goods, capital, services, and people within the EEC. In 1994 the internal market was formalised by the EEA agreement. This agreement also extended the internal market to include most of the member states of the European Free Trade Association, forming the European Economic Area, which encompasses 15 countries. Upon the entry into force of the Maastricht Treaty in 1993, the EEC was renamed the European Community to reflect that it covered a wider range than economic policy. This was also when the three European Communities, including the EC, were collectively made to constitute the first of the three pillars of the European Union, which the treaty also founded. The EC existed in this form until it was abolished by the 2009 Treaty of Lisbon, which incorporated the EC's institutions into the EU's wider framework and provided that the EU would "replace and succeed the European Community". The EEC was also known as the European Common Market in the English-speaking countries and sometimes referred to as the European Community even before it was officially renamed as such in 1993.
2002-02-02T16:55:40Z
2023-12-14T17:54:14Z
[ "Template:EU evolution timeline", "Template:ISBN", "Template:Flagdeco", "Template:Nts", "Template:Unordered list", "Template:Webarchive", "Template:About-distinguish-text", "Template:Infobox country", "Template:Further", "Template:Cite web", "Template:Reflist", "Template:OCLC", "Template:Orders, decorations, and medals of the European Union", "Template:Expand section", "Template:Dts", "Template:Commons category", "Template:European Union topics", "Template:Use British English", "Template:Citation needed", "Template:Cite encyclopedia", "Template:Refend", "Template:Short description", "Template:Redirect-multi", "Template:EU history", "Template:NoteFoot", "Template:Cite book", "Template:Refbegin", "Template:Use dmy dates", "Template:Legend", "Template:Authority control" ]
https://en.wikipedia.org/wiki/European_Economic_Community
9,579
EFTA (disambiguation)
EFTA is the European Free Trade Association, a trade organisation and free trade area. EFTA may also refer to:
[ { "paragraph_id": 0, "text": "EFTA is the European Free Trade Association, a trade organisation and free trade area.", "title": "" }, { "paragraph_id": 1, "text": "EFTA may also refer to:", "title": "" } ]
EFTA is the European Free Trade Association, a trade organisation and free trade area. EFTA may also refer to: European Fair Trade Association, an association of eleven fair trade importers European Federation of Taiwanese Associations
2016-08-17T13:28:19Z
[ "Template:Wiktionary", "Template:Look from", "Template:In title", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/EFTA_(disambiguation)
9,580
European Free Trade Association
The European Free Trade Association (EFTA) is a regional trade organization and free trade area consisting of four European states: Iceland, Liechtenstein, Norway and Switzerland. The organization operates in parallel with the European Union (EU), and all four member states participate in the European Single Market and are part of the Schengen Area. They are not, however, party to the European Union Customs Union. EFTA was historically one of the two dominant western European trade blocs, but is now much smaller and closely associated with its historical competitor, the European Union. It was established on 3 May 1960 to serve as an alternative trade bloc for those European states that were unable or unwilling to join the then European Economic Community (EEC), the main predecessor of the EU. The Stockholm Convention (1960), to establish the EFTA, was signed on 4 January 1960 in the Swedish capital by seven countries (known as the "outer seven": Austria, Denmark, Norway, Portugal, Sweden, Switzerland and the United Kingdom). A revised Convention, the Vaduz Convention, was signed on 21 June 2001 and entered into force on 1 June 2002. Since 1995, only two founding members remain, namely Norway and Switzerland. The other five, Austria, Denmark, Portugal, Sweden and the United Kingdom, had joined the EU at some point in the intervening years. The initial Stockholm Convention was superseded by the Vaduz Convention, which aimed to provide a successful framework for continuing the expansion and liberalization of trade, both among the organization's member states and with the rest of the world. Whilst the EFTA is not a customs union and member states have full rights to enter into bilateral third-country trade arrangements, it does have a coordinated trade policy. As a result, its member states have jointly concluded free trade agreements with the EU and a number of other countries. To participate in the EU's single market, Iceland, Liechtenstein, and Norway are parties to the Agreement on a European Economic Area (EEA), with compliances regulated by the EFTA Surveillance Authority and the EFTA Court. Switzerland has a set of multilateral agreements with the EU and its member states instead. On 12 January 1960, the Convention establishing the European Free Trade Association was initiated in the Golden Hall of the Stockholm City Hall. This established the progressive elimination of customs duties on industrial products, but did not affect agricultural or fisheries products. The main difference between the early EEC and the EFTA was that the latter did not operate common external customs tariffs unlike the former: each EFTA member was free to establish its individual customs duties against, or its individual free trade agreements with, non-EFTA countries. The founding members of the EFTA were: Austria, Denmark, Norway, Portugal, Sweden, Switzerland and the United Kingdom. During the 1960s, these countries were often referred to as the "Outer Seven", as opposed to the Inner Six of the then European Economic Community (EEC). Finland became an associate member in 1961 and a full member in 1986, and Iceland joined in 1970. The United Kingdom and Denmark joined the EEC in 1973 and hence ceased to be EFTA members. Portugal also left EFTA for the European Community in 1986. Liechtenstein joined the EFTA in 1991 (previously its interests had been represented by Switzerland). Austria, Sweden, and Finland joined the EU in 1995 and thus ceased to be EFTA members. 
Twice, in 1972 and in 1994, the Norwegian government tried to join the EU (still the EEC in 1973) and, by doing so, leave the EFTA. However, both times EU membership was rejected in national referendums, keeping Norway in the EFTA. Iceland applied for EU membership in 2009 due to the 2008–2011 Icelandic financial crisis, but has since dropped its bid. Between 1994 and 2011, EFTA memberships for Andorra, San Marino, Monaco, the Isle of Man, Turkey, Israel, Morocco, and other European Neighbourhood Policy partners were discussed. In November 2012, after the Council of the European Union had called for an evaluation of the EU's relations with Andorra, Monaco, and San Marino, which they described as "fragmented", the European Commission published a report outlining the options for their further integration into the EU. Unlike Liechtenstein, which is a member of the EEA via the EFTA and the Schengen Agreement, relations with these three states are based on a collection of agreements covering specific issues. The report examined four alternatives to the current situation: However, the Commission argued that the sectoral approach did not address the major issues and was still needlessly complicated, while EU membership was dismissed in the near future because "the EU institutions are currently not adapted to the accession of such small-sized countries". The remaining options, EEA membership and a FAA with the states, were found to be viable and were recommended by the Commission. In response, the Council requested that negotiations with the three microstates on further integration continue, and that a report be prepared by the end of 2013 detailing the implications of the two viable alternatives and recommendations on how to proceed. As EEA membership is currently only open to EFTA or EU member states, the consent of existing EFTA member states is required for the microstates to join the EEA without becoming members of the EU. In 2011, Jonas Gahr Støre, then Foreign Minister of Norway, an EFTA member state, said that EFTA/EEA membership for the microstates was not the appropriate mechanism for their integration into the internal market due to their different requirements from those of larger countries such as Norway, and suggested that a simplified association would be better suited for them. Espen Barth Eide, Støre's successor, responded to the Commission's report in late 2012 by questioning whether the microstates had sufficient administrative capabilities to meet the obligations of EEA membership. However, he stated that Norway would be open to the possibility of EFTA membership for the microstates if they decided to submit an application, and that the country had not made a final decision on the matter. Pascal Schafhauser, the Counsellor of the Liechtenstein Mission to the EU, said that Liechtenstein, another EFTA member state, was willing to discuss EEA membership for the microstates provided their joining did not impede the functioning of the organization. However, he suggested that the option of direct membership in the EEA for the microstates, outside of both the EFTA and the EU, should be considered. On 18 November 2013, the EU Commission concluded that "the participation of the small-sized countries in the EEA is not judged to be a viable option at present due to the political and institutional reasons", and that Association Agreements were a more feasible mechanism to integrate the microstates into the internal market.
The Norwegian electorate had rejected treaties of accession to the EU in two referendums. At the time of the first referendum in 1972, their neighbour, Denmark, joined. Since the second referendum in 1994, two other Nordic neighbours, Sweden and Finland, have joined the EU. The last two governments of Norway have not advanced the question, as they have both been coalition governments consisting of proponents and opponents of EU membership. Since Switzerland rejected EEA membership in a referendum in 1992, more referendums on EU membership have been initiated, the last time being in 2001. These were all rejected. Switzerland has been in a customs union with fellow EFTA member state and neighbour Liechtenstein since 1924. On 16 July 2009, the government of Iceland formally applied for EU membership, but the negotiation process was suspended in mid-2013, and in 2015 the foreign minister wrote to withdraw its application. Denmark was a founding member of EFTA in 1960, but its membership ended in 1973, when it joined the European Communities. The autonomous territories of the Kingdom of Denmark were covered by Denmark's EFTA membership: Greenland from 1961 and the Faroe Islands from 1968. In mid-2005, representatives of the Faroe Islands raised the possibility of their territory re-joining the EFTA. Because Article 56 of the EFTA Convention only allows sovereign states to become members of the EFTA, the Faroes considered the possibility that the "Kingdom of Denmark in respect of the Faroes" could join the EFTA on their behalf. The Danish Government has stated that this mechanism would not allow the Faroes to become a member of the EEA because Denmark was already a party to the EEA Agreement. The Faroes already have an extensive bilateral free trade agreement with Iceland, known as the Hoyvík Agreement. The United Kingdom was a co-founder of EFTA in 1960, but ceased to be a member upon joining the European Economic Community. The country held a referendum in 2016 on withdrawing from the EU (popularly referred to as "Brexit"), resulting in a 51.9% vote in favour of withdrawing. A 2013 research paper presented to the Parliament of the United Kingdom proposed a number of alternatives to EU membership which would continue to allow it access to the EU's internal market, including continuing EEA membership as an EFTA member state, or the Swiss model of a number of bilateral treaties covering the provisions of the single market. In the first meeting since the Brexit vote, EFTA reacted by saying both that they were open to a UK return and that Britain had many issues to work through. The president of Switzerland, Johann Schneider-Ammann, stated that its return would strengthen the association. However, in August 2016 the Norwegian Government expressed reservations. Norway's European affairs minister, Elisabeth Vik Aspaker, told the Aftenposten newspaper: "It's not certain that it would be a good idea to let a big country into this organization. It would shift the balance, which is not necessarily in Norway's interests." In late 2016, the Scottish First Minister said that her priority was to keep the whole of the UK in the European single market but that taking Scotland alone into the EEA was an option being "looked at". However, other EFTA states have stated that only sovereign states are eligible for membership, so it could only join if it became independent from the UK, unless the solution scouted for the Faroes in 2005 were to be adopted (see above).
In early 2018, British MPs Antoinette Sandbach, Stephen Kinnock and Stephen Hammond all called for the UK to rejoin EFTA. In 1992, the EU, its member states, and the EFTA member states signed the Agreement on the European Economic Area in Porto, Portugal. However, the proposal that Switzerland ratify its participation was rejected in a referendum. (Nevertheless, Switzerland has multiple bilateral treaties with the EU that allow it to participate in the European Single Market, the Schengen Agreement and other programmes.) Thus, except for Switzerland, the EFTA members are also members of the European Economic Area (EEA). The EEA comprises three member states of the European Free Trade Association (EFTA) and 27 member states of the European Union (EU), including Croatia, to which the agreement is provisionally applied pending its ratification by all contracting parties. It was established on 1 January 1994 following an agreement with the European Economic Community (which had become the European Community two months earlier). It allows the EFTA-EEA states to participate in the EU's Internal Market without being members of the EU. They adopt almost all EU legislation related to the single market, except laws on agriculture and fisheries. However, they also contribute to and influence the formation of new EEA-relevant policies and legislation at an early stage as part of a formal decision-shaping process. One EFTA member, Switzerland, has not joined the EEA but has a series of bilateral agreements, including a free trade agreement, with the EU. The following table summarises the various components of EU laws applied in the EFTA countries and their sovereign territories. Some territories of EU member states also have a special status in regard to EU laws applied, as is the case with some European microstates. A Joint Committee consisting of the EEA-EFTA States plus the European Commission (representing the EU) has the function of extending relevant EU law to the non-EU members. An EEA Council meets twice yearly to govern the overall relationship between the EEA members. Rather than being overseen by pan-EEA institutions, the activities of the EEA are regulated by the EFTA Surveillance Authority and the EFTA Court. The EFTA Surveillance Authority and the EFTA Court regulate the activities of the EFTA members in respect of their obligations in the European Economic Area (EEA). Since Switzerland is not an EEA member, it does not participate in these institutions. The EFTA Surveillance Authority performs a role for EFTA members that is equivalent to that of the European Commission for the EU, as "guardian of the treaties", and the EFTA Court performs the role equivalent to that of the European Court of Justice. The original plan for the EEA lacked the EFTA Court: the European Court of Justice was to exercise those roles. However, during the negotiations for the EEA agreement, the European Court of Justice ruled in Opinion 1/91 that giving these powers to the EU institutions with respect to non-EU member states would violate the treaties. Therefore, the current arrangement was developed instead. The EEA and Norway Grants are the financial contributions of Iceland, Liechtenstein and Norway to reduce social and economic disparities in Europe. They were established in conjunction with the 2004 enlargement of the European Economic Area (EEA), which brought together the EU, Iceland, Liechtenstein and Norway in the Internal Market. 
In the period from 2004 to 2009, €1.3 billion of project funding was made available to the 15 beneficiary states in Central and Southern Europe. The EEA and Norway Grants are administered by the Financial Mechanism Office, which is affiliated to the EFTA Secretariat in Brussels. EFTA also originated the Hallmarking Convention and the Pharmaceutical Inspection Convention, both of which are open to non-EFTA states. EFTA has 29 free trade agreements with non-EU countries as well as declarations on cooperation and joint workgroups to improve trade. Currently, the EFTA States have established preferential trade relations with 40 states and territories, in addition to the 27 member states of the European Union. EFTA's interactive Free Trade Map gives an overview of the partners worldwide. Negotiations currently on hold Declarations on cooperation The following agreements are no longer active: EFTA member states' citizens enjoy freedom of movement in each other's territories in accordance with the EFTA convention. EFTA & EEA nationals also enjoy freedom of movement in the European Union (EU). EFTA nationals and EU citizens are not only visa-exempt but are legally entitled to enter and reside in each other's countries. The Citizens' Rights Directive (also sometimes called the "Free Movement Directive") defines the right of free movement for citizens of the European Economic Area (EEA), which includes the three EFTA members Iceland, Norway and Liechtenstein plus the member states of the EU. Switzerland, which is a member of EFTA but not of the EEA, is not bound by the Directive but rather has a separate multilateral agreement on free movement with the EU and its member states. As a result, a citizen of an EFTA country can live and work in all the other EFTA countries and in all the EU countries, and a citizen of an EU country can live and work in all the EFTA countries (but for voting and working in sensitive fields, such as government / police / military, citizenship is often required, and non-citizens may not have the same rights to welfare and unemployment benefits as citizens). The Portugal Fund came into operation in February 1977 when Portugal was still a member of EFTA. It was intended to provide funding for the development of Portugal after the Carnation Revolution and the consequential restoration of democracy and the decolonization of the country's overseas possessions. This followed a period of economic sanctions by most of the international community, which left Portugal economically underdeveloped compared to the rest of western Europe. When Portugal left EFTA in 1985 in order to join the EEC, the remaining EFTA members decided to continue the Portugal Fund so that Portugal would continue to benefit from it. The Fund originally took the form of a low-interest loan from the EFTA member states to the value of US$100 million. Repayment was originally to commence in 1988; however, EFTA then decided to postpone the start of repayments until 1998. The Portugal Fund was dissolved in January 2002.
[ { "paragraph_id": 0, "text": "The European Free Trade Association (EFTA) is a regional trade organization and free trade area consisting of four European states: Iceland, Liechtenstein, Norway and Switzerland. The organization operates in parallel with the European Union (EU), and all four member states participate in the European Single Market and are part of the Schengen Area. They are not, however, party to the European Union Customs Union.", "title": "" }, { "paragraph_id": 1, "text": "EFTA was historically one of the two dominant western European trade blocs, but is now much smaller and closely associated with its historical competitor, the European Union. It was established on 3 May 1960 to serve as an alternative trade bloc for those European states that were unable or unwilling to join the then European Economic Community (EEC), the main predecessor of the EU. The Stockholm Convention (1960), to establish the EFTA, was signed on 4 January 1960 in the Swedish capital by seven countries (known as the \"outer seven\": Austria, Denmark, Norway, Portugal, Sweden, Switzerland and the United Kingdom). A revised Convention, the Vaduz Convention, was signed on 21 June 2001 and entered into force on 1 June 2002.", "title": "" }, { "paragraph_id": 2, "text": "Since 1995, only two founding members remain, namely Norway and Switzerland. The other five, Austria, Denmark, Portugal, Sweden and the United Kingdom, had joined the EU at some point in the intervening years. The initial Stockholm Convention was superseded by the Vaduz Convention, which aimed to provide a successful framework for continuing the expansion and liberalization of trade, both among the organization's member states and with the rest of the world.", "title": "" }, { "paragraph_id": 3, "text": "Whilst the EFTA is not a customs union and member states have full rights to enter into bilateral third-country trade arrangements, it does have a coordinated trade policy. As a result, its member states have jointly concluded free trade agreements with the EU and a number of other countries. To participate in the EU's single market, Iceland, Liechtenstein, and Norway are parties to the Agreement on a European Economic Area (EEA), with compliances regulated by the EFTA Surveillance Authority and the EFTA Court. Switzerland has a set of multilateral agreements with the EU and its member states instead.", "title": "" }, { "paragraph_id": 4, "text": "On 12 January 1960, the Convention establishing the European Free Trade Association was initiated in the Golden Hall of the Stockholm City Hall. This established the progressive elimination of customs duties on industrial products, but did not affect agricultural or fisheries products.", "title": "Membership" }, { "paragraph_id": 5, "text": "The main difference between the early EEC and the EFTA was that the latter did not operate common external customs tariffs unlike the former: each EFTA member was free to establish its individual customs duties against, or its individual free trade agreements with, non-EFTA countries.", "title": "Membership" }, { "paragraph_id": 6, "text": "The founding members of the EFTA were: Austria, Denmark, Norway, Portugal, Sweden, Switzerland and the United Kingdom. 
During the 1960s, these countries were often referred to as the \"Outer Seven\", as opposed to the Inner Six of the then European Economic Community (EEC).", "title": "Membership" }, { "paragraph_id": 7, "text": "Finland became an associate member in 1961 and a full member in 1986, and Iceland joined in 1970. The United Kingdom and Denmark joined the EEC in 1973 and hence ceased to be EFTA members. Portugal also left EFTA for the European Community in 1986. Liechtenstein joined the EFTA in 1991 (previously its interests had been represented by Switzerland). Austria, Sweden, and Finland joined the EU in 1995 and thus ceased to be EFTA members.", "title": "Membership" }, { "paragraph_id": 8, "text": "Twice, in 1972 and in 1994, the Norwegian government had tried to join the EU (still the EEC, in 1973) and by doing so, leave the EFTA. However, both the times, the membership of the EU was rejected in national referendums, keeping Norway in the EFTA. Iceland applied for EU membership in 2009 due to the 2008–2011 Icelandic financial crisis, but has since dropped its bid.", "title": "Membership" }, { "paragraph_id": 9, "text": "Between 1994 and 2011, EFTA memberships for Andorra, San Marino, Monaco, the Isle of Man, Turkey, Israel, Morocco, and other European Neighbourhood Policy partners were discussed.", "title": "Membership" }, { "paragraph_id": 10, "text": "In November 2012, after the Council of the European Union had called for an evaluation of the EU's relations with Andorra, Monaco, and San Marino, which they described as \"fragmented\", the European Commission published a report outlining the options for their further integration into the EU. Unlike Liechtenstein, which is a member of the EEA via the EFTA and the Schengen Agreement, relations with these three states are based on a collection of agreements covering specific issues. The report examined four alternatives to the current situation:", "title": "Membership" }, { "paragraph_id": 11, "text": "However, the Commission argued that the sectoral approach did not address the major issues and was still needlessly complicated, while EU membership was dismissed in the near future because \"the EU institutions are currently not adapted to the accession of such small-sized countries\". The remaining options, EEA membership and a FAA with the states, were found to be viable and were recommended by the commission. In response, the Council requested that negotiations with the three microstates on further integration continue, and that a report be prepared by the end of 2013 detailing the implications of the two viable alternatives and recommendations on how to proceed.", "title": "Membership" }, { "paragraph_id": 12, "text": "As EEA membership is currently only open to EFTA or EU member states, the consent of existing EFTA member states is required for the microstates to join the EEA without becoming members of the EU. In 2011, Jonas Gahr Støre, then Foreign Minister of Norway which is an EFTA member state, said that EFTA/EEA membership for the microstates was not the appropriate mechanism for their integration into the internal market due to their different requirements from those of larger countries such as Norway, and suggested that a simplified association would be better suited for them. Espen Barth Eide, Støre's successor, responded to the commission's report in late 2012 by questioning whether the microstates have sufficient administrative capabilities to meet the obligations of EEA membership. 
However, he stated that Norway would be open to the possibility of EFTA membership for the microstates if they decided to submit an application, and that the country had not made a final decision on the matter. Pascal Schafhauser, the Counsellor of the Liechtenstein Mission to the EU, said that Liechtenstein, another EFTA member state, was willing to discuss EEA membership for the microstates provided their joining did not impede the functioning of the organization. However, he suggested that the option of direct membership in the EEA for the microstates, outside of both the EFTA and the EU, should be considered. On 18 November 2013, the EU Commission concluded that \"the participation of the small-sized countries in the EEA is not judged to be a viable option at present due to the political and institutional reasons\", and that Association Agreements were a more feasible mechanism to integrate the microstates into the internal market.", "title": "Membership" }, { "paragraph_id": 13, "text": "The Norwegian electorate had rejected treaties of accession to the EU in two referendums. At the time of the first referendum in 1972, their neighbour, Denmark joined. Since the second referendum in 1994, two other Nordic neighbours, Sweden and Finland, have joined the EU. The last two governments of Norway have not advanced the question, as they have both been coalition governments consisting of proponents and opponents of EU membership.", "title": "Membership" }, { "paragraph_id": 14, "text": "Since Switzerland rejected the EEA membership in a referendum in 1992, more referendums on EU membership have been initiated, the last time being in 2001. These were all rejected. Switzerland has been in a customs union with fellow EFTA member state and neighbour Liechtenstein since 1924.", "title": "Membership" }, { "paragraph_id": 15, "text": "On 16 July 2009, the government of Iceland formally applied for EU membership, but the negotiation process was suspended in mid-2013, and in 2015 the foreign ministers wrote to withdraw its application.", "title": "Membership" }, { "paragraph_id": 16, "text": "Denmark was a founding member of EFTA in 1960, but its membership ended in 1973, when it joined the European Communities. The autonomous territories of the Kingdom of Denmark were covered by Denmark's EFTA membership: Greenland from 1961 and the Faroe Islands from 1968. In mid-2005, representatives of the Faroe Islands raised the possibility of their territory re-joining the EFTA. Because Article 56 of the EFTA Convention only allows sovereign states to become members of the EFTA, the Faroes considered the possibility that the \"Kingdom of Denmark in respect of the Faroes\" could join the EFTA on their behalf. The Danish Government has stated that this mechanism would not allow the Faroes to become a member of the EEA because Denmark was already a party to the EEA Agreement.", "title": "Membership" }, { "paragraph_id": 17, "text": "The Faroes already have an extensive bilateral free trade agreement with Iceland, known as the Hoyvík Agreement.", "title": "Membership" }, { "paragraph_id": 18, "text": "The United Kingdom was a co-founder of EFTA in 1960, but ceased to be a member upon joining the European Economic Community. The country held a referendum in 2016 on withdrawing from the EU (popularly referred to as \"Brexit\"), resulting in a 51.9% vote in favour of withdrawing. 
A 2013 research paper presented to the Parliament of the United Kingdom proposed a number of alternatives to EU membership which would continue to allow it access to the EU's internal market, including continuing EEA membership as an EFTA member state, or the Swiss model of a number of bilateral treaties covering the provisions of the single market.", "title": "Membership" }, { "paragraph_id": 19, "text": "In the first meeting since the Brexit vote, EFTA reacted by saying both that they were open to a UK return, and that Britain has many issues to work through. The president of Switzerland Johann Schneider-Ammann stated that its return would strengthen the association. However, in August 2016 the Norwegian Government expressed reservations. Norway's European affairs minister, Elisabeth Vik Aspaker, told the Aftenposten newspaper: \"It's not certain that it would be a good idea to let a big country into this organization. It would shift the balance, which is not necessarily in Norway's interests.\"", "title": "Membership" }, { "paragraph_id": 20, "text": "In late 2016, the Scottish First Minister said that her priority was to keep the whole of the UK in the European single market but that taking Scotland alone into the EEA was an option being \"looked at\". However, other EFTA states have stated that only sovereign states are eligible for membership, so it could only join if it became independent from the UK, unless the solution scouted for the Faroes in 2005 were to be adopted (see above).", "title": "Membership" }, { "paragraph_id": 21, "text": "In early 2018, British MPs Antoinette Sandbach, Stephen Kinnock and Stephen Hammond all called for the UK to rejoin EFTA.", "title": "Membership" }, { "paragraph_id": 22, "text": "In 1992, the EU, its member states, and the EFTA member states signed the Agreement on the European Economic Area in Porto, Portugal. However, the proposal that Switzerland ratify its participation was rejected by referendum. (Nevertheless, Switzerland has multiple bilateral treaties with the EU that allow it to participate in the European Single Market, the Schengen Agreement and other programmes). Thus, except for Switzerland, the EFTA members are also members of the European Economic Area (EEA). The EEA comprises three member states of the European Free Trade Association (EFTA) and 27 member states of the European Union (EU), including Croatia which the agreement is provisionally applied to, pending its ratification by all contracting parties. It was established on 1 January 1994 following an agreement with the European Economic Community (which had become the European Community two months earlier). It allows the EFTA-EEA states to participate in the EU's Internal Market without being members of the EU. They adopt almost all EU legislation related to the single market, except laws on agriculture and fisheries. However, they also contribute to and influence the formation of new EEA relevant policies and legislation at an early stage as part of a formal decision-shaping process. One EFTA member, Switzerland, has not joined the EEA but has a series of bilateral agreements, including a free trade agreement, with the EU.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 23, "text": "The following table summarises the various components of EU laws applied in the EFTA countries and their sovereign territories. 
Some territories of EU member states also have a special status in regard to EU laws applied as is the case with some European microstates.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 24, "text": "A Joint Committee consisting of the EEA-EFTA States plus the European Commission (representing the EU) has the function of extending relevant EU law to the non EU members. An EEA Council meets twice yearly to govern the overall relationship between the EEA members.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 25, "text": "Rather than setting up pan-EEA institutions, the activities of the EEA are regulated by the EFTA Surveillance Authority and the EFTA Court. The EFTA Surveillance Authority and the EFTA Court regulate the activities of the EFTA members in respect of their obligations in the European Economic Area (EEA). Since Switzerland is not an EEA member, it does not participate in these institutions.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 26, "text": "The EFTA Surveillance Authority performs a role for EFTA members that is equivalent to that of the European Commission for the EU, as \"guardian of the treaties\" and the EFTA Court performs the European Court of Justice-equivalent role.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 27, "text": "The original plan for the EEA lacked the EFTA Court: the European Court of Justice was to exercise those roles. However, during the negotiations for the EEA agreement, the European Court of Justice ruled by the Opinion 1/91 that it would be a violation of the treaties to give to the EU institutions these powers with respect to non-EU member states. Therefore, the current arrangement was developed instead.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 28, "text": "The EEA and Norway Grants are the financial contributions of Iceland, Liechtenstein and Norway to reduce social and economic disparities in Europe. They were established in conjunction with the 2004 enlargement of the European Economic Area (EEA), which brought together the EU, Iceland, Liechtenstein and Norway in the Internal Market. In the period from 2004 to 2009, €1.3 billion of project funding was made available for project funding in the 15 beneficiary states in Central and Southern Europe. The EEA and Norway Grants are administered by the Financial Mechanism Office, which is affiliated to the EFTA Secretariat in Brussels.", "title": "Relationship with the European Union: the European Economic Area" }, { "paragraph_id": 29, "text": "EFTA also originated the Hallmarking Convention and the Pharmaceutical Inspection Convention, both of which are open to non-EFTA states.", "title": "International conventions" }, { "paragraph_id": 30, "text": "EFTA has 29 free trade agreements with non-EU countries as well as declarations on cooperation and joint workgroups to improve trade. 
Currently, the EFTA States have established preferential trade relations with 40 states and territories, in addition to the 27 member states of the European Union.", "title": "International trade relations" }, { "paragraph_id": 31, "text": "EFTA's interactive Free Trade Map gives an overview of the partners worldwide.", "title": "International trade relations" }, { "paragraph_id": 32, "text": "Negotiations currently on hold", "title": "International trade relations" }, { "paragraph_id": 33, "text": "Declarations on cooperation", "title": "International trade relations" }, { "paragraph_id": 34, "text": "The following agreements are no longer active:", "title": "International trade relations" }, { "paragraph_id": 35, "text": "EFTA member states' citizens enjoy freedom of movement in each other's territories in accordance with the EFTA convention. EFTA & EEA nationals also enjoy freedom of movement in the European Union (EU). EFTA nationals and EU citizens are not only visa-exempt but are legally entitled to enter and reside in each other's countries. The Citizens' Rights Directive (also sometimes called the \"Free Movement Directive\") defines the right of free movement for citizens of the European Economic Area (EEA), which includes the three EFTA members Iceland, Norway and Liechtenstein plus the member states of the EU. Switzerland, which is a member of EFTA but not of the EEA, is not bound by the Directive but rather has a separate multilateral agreement on free movement with the EU and its member states.", "title": "Travel policies" }, { "paragraph_id": 36, "text": "As a result, a citizen of an EFTA country can live and work in all the other EFTA countries and in all the EU countries, and a citizen of an EU country can live and work in all the EFTA countries (but for voting and working in sensitive fields, such as government / police / military, citizenship is often required, and non-citizens may not have the same rights to welfare and unemployment benefits as citizens).", "title": "Travel policies" }, { "paragraph_id": 37, "text": "The Portugal Fund came into operation in February 1977 when Portugal was still a member of EFTA. It was to provide funding for the development of Portugal after the Carnation Revolution and the consequential restoration of democracy and the decolonization of the country's overseas possessions. This followed a period of economic sanctions by most of the international community, which left Portugal economically underdeveloped compared to the rest of the western Europe. When Portugal left EFTA in 1985 in order to join the EEC, the remaining EFTA members decided to continue the Portugal Fund so that Portugal would continue to benefit from it. The Fund originally took the form of a low-interest loan from the EFTA member states to the value of US$100 million. Repayment was originally to commence in 1988, however, EFTA then decided to postpone the start of repayments until 1998. The Portugal Fund was dissolved in January 2002.", "title": "Portugal Fund" } ]
The European Free Trade Association (EFTA) is a regional trade organization and free trade area consisting of four European states: Iceland, Liechtenstein, Norway and Switzerland. The organization operates in parallel with the European Union (EU), and all four member states participate in the European Single Market and are part of the Schengen Area. They are not, however, party to the European Union Customs Union. EFTA was historically one of the two dominant western European trade blocs, but is now much smaller and closely associated with its historical competitor, the European Union. It was established on 3 May 1960 to serve as an alternative trade bloc for those European states that were unable or unwilling to join the then European Economic Community (EEC), the main predecessor of the EU. The Stockholm Convention, establishing the EFTA, was signed on 4 January 1960 in the Swedish capital by seven countries. A revised Convention, the Vaduz Convention, was signed on 21 June 2001 and entered into force on 1 June 2002. Since 1995, only two founding members remain, namely Norway and Switzerland. The other five, Austria, Denmark, Portugal, Sweden and the United Kingdom, had joined the EU at some point in the intervening years. The initial Stockholm Convention was superseded by the Vaduz Convention, which aimed to provide a successful framework for continuing the expansion and liberalization of trade, both among the organization's member states and with the rest of the world. Whilst the EFTA is not a customs union and member states have full rights to enter into bilateral third-country trade arrangements, it does have a coordinated trade policy. As a result, its member states have jointly concluded free trade agreements with the EU and a number of other countries. To participate in the EU's single market, Iceland, Liechtenstein, and Norway are parties to the Agreement on a European Economic Area (EEA), with compliance regulated by the EFTA Surveillance Authority and the EFTA Court. Switzerland has a set of multilateral agreements with the EU and its member states instead.
2001-07-25T11:30:32Z
2023-12-25T03:25:37Z
[ "Template:Citation", "Template:Europe topics (small)", "Template:Nts", "Template:Main", "Template:Supranational European Bodies", "Template:Cite web", "Template:SWE", "Template:AUT", "Template:European Free Trade Association (EFTA)", "Template:LIE", "Template:Flagdeco", "Template:Flagicon image", "Template:UK", "Template:Legend", "Template:No", "Template:Cbignore", "Template:Citation needed", "Template:Cite magazine", "Template:Commons category-inline", "Template:Use dmy dates", "Template:Infobox Geopolitical organization", "Template:Dubious", "Template:Small", "Template:NOR", "Template:CHE", "Template:Economics", "Template:Distinguish", "Template:Dts", "Template:Nowrap", "Template:Yes", "Template:European Economic Area (EEA)", "Template:See also", "Template:Partial", "Template:Source needed", "Template:Reflist", "Template:Cite news", "Template:Authority control", "Template:Use British English", "Template:Country", "Template:ISL", "Template:Flag", "Template:Short description", "Template:Redirect", "Template:UN Population", "Template:Smaller" ]
https://en.wikipedia.org/wiki/European_Free_Trade_Association
9,581
European Parliament
The European Parliament (EP) is one of the legislative bodies of the European Union and one of its seven institutions. Together with the Council of the European Union (known as the Council and informally as the Council of Ministers), it adopts European legislation, following a proposal by the European Commission. The Parliament is composed of 705 members (MEPs). It represents the second-largest democratic electorate in the world (after the Parliament of India), with an electorate of 375 million eligible voters in 2009. Since 1979, the Parliament has been directly elected every five years by the citizens of the European Union through universal suffrage. Voter turnout in parliamentary elections decreased at each election after 1979 until 2019, when turnout increased by eight percentage points and rose above 50% for the first time since 1994. The voting age is 18 in all EU member states except for Malta, Austria and Germany, where it is 16, and Greece, where it is 17. Belgian citizens can request to vote from the age of 16 as well. Although the European Parliament has legislative power, as does the Council, it does not formally possess the right of initiative as most national parliaments of the member states do, with the right of initiative being solely a prerogative of the European Commission. The Parliament is the "first institution" of the European Union (mentioned first in its treaties and having ceremonial precedence over the other EU institutions), and shares equal legislative and budgetary powers with the Council (except on a few issues where special legislative procedures apply). It likewise has equal control over the EU budget. Ultimately, the European Commission, which serves as the executive branch of the EU, is accountable to Parliament. In particular, Parliament can decide whether or not to approve the European Council's nominee for President of the Commission, and is further tasked with approving (or rejecting) the appointment of the commission as a whole. It can subsequently force the current Commission to resign by adopting a motion of censure. The president of the European Parliament is the body's speaker and presides over the multi-party chamber. The five largest political groups are the European People's Party Group (EPP), the Progressive Alliance of Socialists and Democrats (S&D), Renew Europe (previously ALDE), the Greens/European Free Alliance (Greens/EFA) and Identity and Democracy (ID). The last EU-wide election was held in 2019. The Parliament is headquartered in Strasbourg, France, and has its administrative offices in Luxembourg City. Plenary sessions are "normally held in Strasbourg for four days a month, but sometimes there are additional sessions in Brussels", while the Parliament's committee meetings are held primarily in Brussels, Belgium. The Parliament, like the other EU institutions, was not designed in its current form when it first met on 10 September 1952. One of the oldest common institutions, it began as the Common Assembly of the European Coal and Steel Community (ECSC). It was a consultative assembly of 78 appointed parliamentarians drawn from the national parliaments of member states, having no legislative powers. The change since its foundation was highlighted by Professor David Farrell of the University of Manchester: "For much of its life, the European Parliament could have been justly labelled a 'multi-lingual talking shop'." 
Its development since its foundation shows how the European Union's structures have evolved without a clear 'master plan'. Tom Reid of The Washington Post has said of the union that "nobody would have deliberately designed a government as complex and as redundant as the EU". Even the Parliament's three working locations, which have switched several times, are a result of various agreements or lack of agreements. Although most MEPs would prefer to be based just in Brussels, at John Major's 1992 Edinburgh summit, France engineered a treaty amendment to confirm the European Parliament's seat permanently in Strasbourg. The body was not mentioned in the original Schuman Declaration. It was assumed or hoped that difficulties with the British would be resolved to allow the Parliamentary Assembly of the Council of Europe to perform legislative tasks. A separate Assembly was introduced during negotiations on the Treaty as an institution to counterbalance and monitor the executive while providing democratic legitimacy. The wording of the ECSC Treaty demonstrated leaders' desire for more than a normal consultative assembly by allowing for direct election and using the term "representatives of the people". Its early importance was highlighted when the Assembly was given the task of drawing up the draft treaty to establish a European Political Community. By this document, the Ad Hoc Assembly was established on 13 September 1952 with extra members, but after the failure of the negotiated and proposed European Defence Community (French parliament veto), the project was dropped. Despite this, the European Economic Community and Euratom were established in 1958 by the Treaties of Rome. The Common Assembly was shared by all three communities (which had separate executives) and it renamed itself the European Parliamentary Assembly. The first meeting was held on 19 March 1958 in Luxembourg City; the Assembly elected Schuman as its president, and on 13 May it rearranged itself to sit according to political ideology rather than nationality. This is seen as the birth of the modern European Parliament, with Parliament's 50-year celebrations being held in March 2008 rather than 2002. The three communities merged their remaining organs as the European Communities in 1967, and the body's name was changed to the current "European Parliament" in 1962. In 1970 the Parliament was granted power over areas of the Communities' budget, which were expanded to the whole budget in 1975. Under the Rome Treaties, the Parliament was to have become an elected body. However, the Council was required to agree a uniform voting system beforehand, which it failed to do. The Parliament threatened to take the Council to the European Court of Justice; this led to a compromise whereby the Council would agree to elections, but the issue of voting systems would be put off until a later date. For its sessions, the assembly, and later the parliament, convened until 1999 in the same premises as the Parliamentary Assembly of the Council of Europe: the House of Europe until 1977, and the Palace of Europe until 1999. In 1979, its members were directly elected for the first time. This sets it apart from similar institutions such as the Parliamentary Assembly of the Council of Europe or the Pan-African Parliament, which are appointed. After that first election, the parliament held its first session on 17 July 1979, electing Simone Veil MEP as its president. 
Veil was also the first female president of the Parliament since it was formed as the Common Assembly. As an elected body, the Parliament began to draft proposals addressing the functioning of the EU. For example, in 1984, inspired by its previous work on the Political Community, it drafted the "draft Treaty establishing the European Union" (also known as the 'Spinelli Plan' after its rapporteur Altiero Spinelli MEP). Although it was not adopted, many ideas were later implemented by other treaties. Furthermore, the Parliament began holding votes on proposed Commission Presidents from the 1980s, before it was given any formal right to veto. Since it became an elected body, the membership of the European Parliament has simply expanded whenever new nations have joined (the membership was also adjusted upwards in 1994 after German reunification). Following this, the Treaty of Nice imposed a cap on the number of members to be elected: 732. Like those of the other institutions, the Parliament's seat was not yet fixed. The provisional arrangements placed Parliament in Strasbourg, while the Commission and Council had their seats in Brussels. In 1985 the Parliament, wishing to be closer to these institutions, built a second chamber in Brussels and moved some of its work there despite protests from some states. A final agreement was eventually reached by the European Council in 1992. It stated the Parliament would retain its formal seat in Strasbourg, where twelve sessions a year would be held, but with all other parliamentary activity in Brussels. This two-seat arrangement was contested by the Parliament, but was later enshrined in the Treaty of Amsterdam. To this day the institution's locations are a source of contention. The Parliament gained more powers from successive treaties, namely through the extension of the ordinary legislative procedure (then called the codecision procedure), and in 1999, the Parliament forced the resignation of the Santer Commission. The Parliament had refused to approve the Community budget over allegations of fraud and mismanagement in the commission. The two main parties took on a government-opposition dynamic for the first time during the crisis, which ended with the Commission resigning en masse, the first forced resignation of a Commission, in the face of an impending censure from the Parliament. In 2004, following the largest trans-national election in history, despite the European Council choosing a President from the largest political group (the EPP), the Parliament again exerted pressure on the commission. During the Parliament's hearings of the proposed Commissioners, MEPs raised doubts about some nominees, with the Civil Liberties Committee rejecting Rocco Buttiglione from the post of Commissioner for Justice, Freedom and Security over his views on homosexuality. That was the first time the Parliament had ever voted against an incoming Commissioner, and despite Barroso's insistence upon Buttiglione, the Parliament forced him to be withdrawn. A number of other Commissioners also had to be withdrawn or reassigned before Parliament allowed the Barroso Commission to take office. Along with the extension of the ordinary legislative procedure, the Parliament's democratic mandate has given it greater control over legislation relative to the other institutions. In voting on the Bolkestein directive in 2006, the Parliament voted by a large majority for over 400 amendments that changed the fundamental principle of the law. 
The Financial Times described it in the following terms: "That is where the European parliament has suddenly come into its own. It marks another shift in power between the three central EU institutions. Last week's vote suggests that the directly elected MEPs, in spite of their multitude of ideological, national and historical allegiances, have started to coalesce as a serious and effective EU institution, just as enlargement has greatly complicated negotiations inside both the Council and Commission." In 2007, for the first time, Justice Commissioner Franco Frattini included Parliament in talks on the second Schengen Information System even though MEPs only needed to be consulted on parts of the package. After that experiment, Frattini indicated he would like to include Parliament in all justice and criminal matters, informally pre-empting the new powers it was due to gain in 2009 as part of the Treaty of Lisbon. Between 2007 and 2009, a special working group on parliamentary reform implemented a series of changes to modernise the institution, such as more speaking time for rapporteurs, increased committee co-operation and other efficiency reforms. The Lisbon Treaty came into force on 1 December 2009, granting Parliament powers over the entire EU budget, making Parliament's legislative powers equal to the Council's in nearly all areas and linking the appointment of the Commission President to Parliament's own elections. Barroso gained the support of the European Council for a second term and secured majority support from the Parliament in September 2009. Parliament voted with 382 votes in favour and 219 votes against (117 abstentions), with support from the European People's Party, the European Conservatives and Reformists and the Alliance of Liberals and Democrats for Europe. The liberals gave support after Barroso gave them a number of concessions; the liberals had previously joined the socialists' call for a delayed vote (the EPP had wanted to approve Barroso in July of that year). Once Barroso put forward the candidates for his next Commission, another opportunity to gain concessions arose. Bulgarian nominee Rumiana Jeleva was forced to step down by Parliament due to concerns over her experience and financial interests. She had the support only of the EPP, which began to retaliate against left-wing candidates before Jeleva gave in and was replaced (setting back the final vote further). Before the final vote, Parliament demanded a number of concessions as part of a future working agreement under the new Lisbon Treaty. The deal includes a provision that Parliament's president will attend high-level Commission meetings. Parliament will have a seat in the EU's Commission-led international negotiations and have a right to information on agreements. However, Parliament secured only an observer seat. Parliament also did not secure a say over the appointment of delegation heads and special representatives for foreign policy, although they will appear before Parliament after they have been appointed by the High Representative. One major internal demand was a pledge from the Commission that it would put forward legislation when Parliament requests it. Barroso considered this an infringement on the Commission's powers but did agree to respond within three months. Most requests are already responded to positively. During the setting up of the European External Action Service (EEAS), Parliament used its control over the EU budget to influence the shape of the EEAS. 
MEPs had aimed at getting greater oversight over the EEAS by linking it to the commission and having political deputies to the High Representative. MEPs did not manage to get everything they demanded. However, they got broader financial control over the new body. In December 2017, Politico denounced the lack of racial diversity among Members of the European Parliament. The subsequent news coverage contributed to the creation of the Brussels So White movement. In January 2019, Conservative MEPs supported proposals to boost opportunities for women and tackle sexual harassment in the European Parliament. In 2022, four people were arrested on corruption charges, in what came to be known as the Qatar corruption scandal at the European Parliament. In October 2023, the Parliament adopted a resolution to condemn "Hamas' despicable terrorist attacks against Israel". The Parliament and Council have been compared to the two chambers of a bicameral legislature. However, there are some differences from national legislatures; for example, neither the Parliament nor the Council has the power of legislative initiative (except for the fact that the Council has the power in some intergovernmental matters). In Community matters, this is a power uniquely reserved for the European Commission (the executive). Therefore, while Parliament can amend and reject legislation, to make a proposal for legislation, it needs the commission to draft a bill before anything can become law. The value of such a power has been questioned by noting that in the national legislatures of the member states 85% of initiatives introduced without executive support fail to become law. Yet former Parliament president Hans-Gert Pöttering has argued that, as the Parliament does have the right to ask the Commission to draft such legislation, and as the Commission is following Parliament's proposals more and more, Parliament does have a de facto right of legislative initiative. The Parliament also has a great deal of indirect influence, through non-binding resolutions and committee hearings, as a "pan-European soapbox" with the ear of thousands of Brussels-based journalists. There is also an indirect effect on foreign policy; the Parliament must approve all development grants, including those overseas. For example, the support for post-war Iraq reconstruction, or incentives for the cessation of Iranian nuclear development, must be supported by the Parliament. Parliamentary support was also required for the transatlantic passenger data-sharing deal with the United States. Finally, Parliament holds a non-binding vote on new EU treaties but cannot veto them. However, when Parliament threatened to vote down the Nice Treaty, the Belgian and Italian Parliaments said they would veto the treaty on the European Parliament's behalf. With each new treaty, the powers of the Parliament, in terms of its role in the Union's legislative procedures, have expanded. The procedure which has slowly become dominant is the "ordinary legislative procedure" (previously named "codecision procedure"), which provides an equal footing between Parliament and Council. In particular, under the procedure, the Commission presents a proposal to Parliament and the Council, which can only become law if both agree on a text, which they do (or not) through successive readings up to a maximum of three. In its first reading, Parliament may send amendments to the Council, which can either adopt the text with those amendments or send back a "common position". 
That position may either be approved by Parliament, or Parliament may reject the text by an absolute majority, causing it to fail, or it may adopt further amendments, also by an absolute majority. If the Council does not approve these, then a "Conciliation Committee" is formed. The committee is composed of the Council members plus an equal number of MEPs who seek to agree a compromise. Once a position is agreed, it has to be approved by Parliament, by a simple majority. This is also aided by Parliament's mandate as the only directly democratic institution, which has given it leeway to have greater control over legislation than other institutions, for example over its changes to the Bolkestein directive in 2006. The few other areas that operate the special legislative procedures are justice and home affairs, budget and taxation, and certain aspects of other policy areas, such as the fiscal aspects of environmental policy. In these areas, the Council or Parliament decide law alone. The procedure also depends upon which type of institutional act is being used. The strongest act is a regulation, an act or law which is directly applicable in its entirety. Then there are directives which bind member states to certain goals which they must achieve. They do this through their own laws and hence have room to manoeuvre in deciding upon them. A decision is an instrument which is focused on a particular person or group and is directly applicable. Institutions may also issue recommendations and opinions, which are merely non-binding declarations. There is a further document which does not follow normal procedures: the "written declaration", which is similar to an early day motion used in the Westminster system. It is a document proposed by up to five MEPs on a matter within the EU's activities used to launch a debate on that subject. Once posted outside the entrance to the hemicycle, the declaration can be signed by members, and if a majority do so it is forwarded to the President and announced to the plenary before being forwarded to the other institutions and formally noted in the minutes. The legislative branch officially holds the Union's budgetary authority with powers gained through the Budgetary Treaties of the 1970s and the Lisbon Treaty. The EU budget is subject to a form of the ordinary legislative procedure with a single reading giving Parliament power over the entire budget (before 2009, its influence was limited to certain areas) on an equal footing to the Council. If there is a disagreement between them, it is taken to a conciliation committee as it is for legislative proposals. If the joint conciliation text is not approved, the Parliament may adopt the budget definitively. The Parliament is also responsible for discharging the implementation of previous budgets based on the annual report of the European Court of Auditors. It has refused to approve the budget only twice, in 1984 and in 1998. On the latter occasion it led to the resignation of the Santer Commission, highlighting how the budgetary power gives Parliament a great deal of power over the Commission. Parliament also makes extensive use of its budgetary and other powers elsewhere; for example, in the setting up of the European External Action Service, Parliament has a de facto veto over its design as it has to approve the budgetary and staff changes. The President of the European Commission is proposed by the European Council on the basis of the European elections to Parliament. 
That proposal has to be approved by the Parliament (by a simple majority), which "elects" the President according to the treaties. Following the approval of the Commission President, the members of the commission are proposed by the President in accord with the member states. Each Commissioner comes before a relevant parliamentary committee hearing covering the proposed portfolio. They are then, as a body, approved or rejected by the Parliament. In practice, the Parliament has never voted against a President or his Commission, but it did seem likely when the Barroso Commission was put forward. The resulting pressure forced the proposal to be withdrawn and changed to be more acceptable to parliament. That pressure was seen as an important sign by some of the evolving nature of the Parliament and its ability to make the Commission accountable, rather than being a rubber stamp for candidates. Furthermore, in voting on the commission, MEPs also voted along party lines, rather than national lines, despite frequent pressure from national governments on their MEPs. This cohesion and willingness to use the Parliament's power ensured greater attention from national leaders, other institutions and the public – which had previously given the lowest ever turnout for the Parliament's elections. The Parliament also has the power to censure the Commission by a two-thirds majority, which forces the resignation of the entire Commission from office. As with approval, this power has never been used, but it was threatened against the Santer Commission, which subsequently resigned of its own accord. There are a few other controls, such as: the requirement of the Commission to submit reports to the Parliament and answer questions from MEPs; the requirement of the President-in-office of the Council to present its programme at the start of the presidency; the obligation on the President of the European Council to report to Parliament after each of its meetings; the right of MEPs to make requests for legislation and policy to the commission; and the right to question members of those institutions (e.g. "Commission Question Time" every Tuesday). At present, MEPs may ask a question on any topic whatsoever, but in July 2008 MEPs voted to limit questions to those within the EU's mandate and ban offensive or personal questions. The Parliament also has other powers of general supervision, mainly granted by the Maastricht Treaty. The Parliament has the power to set up a Committee of Inquiry, for example over mad cow disease or CIA detention flights – the former led to the creation of the European veterinary agency. The Parliament can call on other institutions to answer questions and, if necessary, take them to court if they break EU law or treaties. Furthermore, it has powers over the appointment of the members of the Court of Auditors and the president and executive board of the European Central Bank. The ECB president is also obliged to present an annual report to the parliament. The European Ombudsman, who deals with public complaints against all institutions, is elected by the Parliament. Petitions can also be brought forward by any EU citizen on a matter within the EU's sphere of activities. The Committee on Petitions hears cases, some 1500 each year, sometimes presented by the citizen themselves at the Parliament. While the Parliament attempts to resolve the issue as a mediator, it does resort to legal proceedings if necessary to resolve the citizen's dispute. 
The parliamentarians are known in English as Members of the European Parliament (MEPs). They are elected every five years by universal adult suffrage and sit according to political allegiance. About one third are women. Before the first direct elections, in 1979, they were appointed by their national parliaments. The Parliament has been criticized for underrepresentation of minority groups. In 2017, an estimated 17 MEPs were non-white, and of these, three were black, a disproportionately low number. According to activist organization European Network Against Racism, while an estimated 10% of Europe is composed of racial and ethnic minorities, only 5% of MEPs were members of such groups following the 2019 European Parliament election. Under the Lisbon Treaty, seats are allocated to each state according to population and the maximum number of members is set at 751 (however, as the President cannot vote while in the chair there will only be 750 voting members at any one time). Since 1 February 2020, when the United Kingdom left the EU, 705 MEPs (including the president of the Parliament) have sat in the European Parliament. Representation is currently limited to a maximum of 96 seats and a minimum of 6 seats per state, and the seats are distributed according to "degressive proportionality", i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly ten times more influence per voter than citizens of the six largest countries. As of 2014, Germany (80.9 million inhabitants) has 96 seats (previously 99 seats), i.e. one seat for 843,000 inhabitants. Malta (0.4 million inhabitants) has 6 seats, i.e. one seat for 70,000 inhabitants. The new system implemented under the Lisbon Treaty, including revising the seat allocation well before elections, was intended to avoid political horse-trading when the allocations have to be revised to reflect demographic changes. Pursuant to this apportionment, the constituencies are formed. In four EU member states (Belgium, Ireland, Italy and Poland), the national territory is divided into a number of constituencies. In the remaining member states, the whole country forms a single constituency. All member states hold elections to the European Parliament using various forms of proportional representation. Due to the delay in ratifying the Lisbon Treaty, the seventh parliament was elected under the lower Nice Treaty cap. A small-scale treaty amendment was ratified on 29 November 2011. This amendment brought in transitional provisions to allow the 18 additional MEPs created under the Lisbon Treaty to be elected or appointed before the 2014 election. Under the Lisbon Treaty reforms, Germany was the only state to lose members, from 99 to 96. However, these seats were not removed until the 2014 election. Before 2009, members received the same salary as members of their national parliament. However, from 2009 a new members' statute came into force, after years of attempts, giving all members equal monthly pay of €8,484.05 each in 2016, subject to a European Union tax; it can also be taxed nationally. MEPs are entitled to a pension, paid by Parliament, from the age of 63. Members are also entitled to allowances for office costs and subsistence, and travelling expenses, based on actual cost. Besides their pay, members are granted a number of privileges and immunities. 
To ensure their free movement to and from the Parliament, they are accorded by their own states the facilities accorded to senior officials travelling abroad and, by other state governments, the status of visiting foreign representatives. When in their own state, they have all the immunities accorded to national parliamentarians, and, in other states, they have immunity from detention and legal proceedings. However, immunity cannot be claimed when a member is found committing a criminal offence and the Parliament also has the right to strip a member of their immunity. MEPs in Parliament are organised into eight different parliamentary groups, including thirty non-attached members known as non-inscrits. The two largest groups are the European People's Party (EPP) and the Socialists & Democrats (S&D). These two groups have dominated the Parliament for much of its life, continuously holding between 50 and 70 percent of the seats between them. No single group has ever held a majority in Parliament. As a result of being broad alliances of national parties, European group parties are very decentralised and hence have more in common with parties in federal states like Germany or the United States than unitary states like the majority of the EU states. Nevertheless, the European groups were actually more cohesive than their US counterparts between 2004 and 2009. Groups are often based on a single European political party such as the European People's Party. However, they can, like the liberal group, include more than one European party as well as national parties and independents. For a group to be recognised, it needs 23 MEPs from seven different countries. Groups receive funding from the parliament. Given that the Parliament does not form the government in the traditional sense of a Parliamentary system, its politics have developed along more consensual lines with dynamical coalitions rather than majority rule of competing parties and coalitions. Indeed, for much of its life it has been dominated by a grand coalition of the European People's Party and the Party of European Socialists. The two major parties tend to co-operate to find a compromise between their two groups leading to proposals endorsed by huge majorities. However, this does not always produce agreement, and each may instead try to build other alliances, the EPP normally with other centre-right or right wing Groups and the PES with centre-left or left wing groups. Sometimes, the Liberal Group is then in the pivotal position. There are also occasions where very sharp party political divisions have emerged, for example over the resignation of the Santer Commission. When the initial allegations against the Commission emerged, they were directed primarily against Édith Cresson and Manuel Marín, both socialist members. When the parliament was considering refusing to discharge the Community budget, President Jacques Santer stated that a no vote would be tantamount to a vote of no confidence. The Socialist group supported the commission and saw the issue as an attempt by the EPP to discredit their party ahead of the 1999 elections. Socialist leader, Pauline Green MEP, attempted a vote of confidence and the EPP put forward counter motions. During this period the two parties took on similar roles to a government-opposition dynamic, with the Socialists supporting the executive and EPP renouncing its previous coalition support and voting it down. 
Politicisation such as this has been increasing; in 2007 Simon Hix of the London School of Economics noted that: Our work also shows that politics in the European Parliament is becoming increasingly based around party and ideology. Voting is increasingly split along left-right lines, and the cohesion of the party groups has risen dramatically, particularly in the fourth and fifth parliaments. So there are likely to be policy implications here too. During the fifth term, 1999 to 2004, there was a break in the grand coalition resulting in a centre-right coalition between the Liberal and People's parties. This was reflected in the Presidency of the Parliament with the terms being shared between the EPP and the ELDR, rather than the EPP and Socialists. In the following term the liberal group grew to hold 88 seats, the largest number of seats held by any third party in Parliament. The EPP-S&D coalition lost its majority after the 2019 European Parliament election, requiring the support of other political groups for a majority. Elections have taken place, directly in every member state, every five years since 1979. As of 2019 there have been nine elections. When a nation joins mid-term, a by-election will be held to elect its representatives. This has happened six times, most recently when Croatia joined in 2013. Elections take place across four days according to local custom and, apart from having to be proportional, the electoral system is chosen by the member state. This includes allocation of sub-national constituencies; while most members have a national list, some divide their allocation between regions. Seats are allocated to member states according to their population, since 2014 with no state having more than 96, but no fewer than 6, to maintain proportionality. The most recent Union-wide elections to the European Parliament were the European elections of 2019, held from 23 to 26 May 2019. They were the largest simultaneous transnational elections ever held anywhere in the world. The first session of the ninth parliament started on 2 July 2019. European political parties have the exclusive right to campaign during the European elections (as opposed to their corresponding EP groups). There have been a number of proposals designed to attract greater public attention to the elections. One such innovation in the 2014 elections was that the pan-European political parties fielded "candidates" for president of the Commission, the so-called Spitzenkandidaten (German, "leading candidates" or "top candidates"). However, European Union governance is based on a mixture of intergovernmental and supranational features: the President of the European Commission is nominated by the European Council, representing the governments of the member states, and there is no obligation for them to nominate the successful "candidate". The Lisbon Treaty merely states that they should take account of the results of the elections when choosing whom to nominate. The so-called Spitzenkandidaten were Jean-Claude Juncker for the European People's Party, Martin Schulz for the Party of European Socialists, Guy Verhofstadt for the Alliance of Liberals and Democrats for Europe Party, Ska Keller and José Bové jointly for the European Green Party and Alexis Tsipras for the Party of the European Left. Turnout dropped consistently at every election after the first, and from 1999 until 2019 was below 50%. In 2007 both Bulgaria and Romania elected their MEPs in by-elections, having joined at the beginning of 2007. 
The Bulgarian and Romanian elections saw two of the lowest turnouts for European elections, just 28.6% and 28.3% respectively. This trend was interrupted in the 2019 election, when turnout increased by 8% EU-wide, rising to 50.6%, the highest since 1994. In England, Scotland and Wales, EP elections were originally held for a constituency MEP on a first-past-the-post basis. In 1999 the system was changed to a form of proportional representation where a large group of candidates stand for a post within a very large regional constituency. One could vote for a party, but not a candidate (unless that party had a single candidate). Each year the activities of the Parliament cycle between committee weeks where reports are discussed in committees and interparliamentary delegations meet, political group weeks for members to discuss work within their political groups and session weeks where members spend 3½ days in Strasbourg for part-sessions. In addition six 2-day part-sessions are organised in Brussels throughout the year. Four weeks are allocated as constituency week to allow members to do exclusively constituency work. Finally there are no meetings planned during the summer weeks. The Parliament has the power to meet without being convened by another authority. Its meetings are partly controlled by the treaties but are otherwise up to Parliament according to its own "Rules of Procedure" (the regulations governing the parliament). During sessions, members may speak after being called on by the President. Members of the Council or Commission may also attend and speak in debates. Partly due to the need for interpretation, and the politics of consensus in the chamber, debates tend to be calmer and more polite than, say, the Westminster system. Voting is conducted primarily by a show of hands, that may be checked on request by electronic voting. Votes of MEPs are not recorded in either case, however; that only occurs when there is a roll-call ballot. This is required for the final votes on legislation and also whenever a political group or 30 MEPs request it. The number of roll-call votes has increased with time. Votes can also be a completely secret ballot (for example, when the president is elected). All recorded votes, along with minutes and legislation, are recorded in the Official Journal of the European Union and can be accessed online. Votes usually do not follow a debate, but rather they are grouped with other due votes on specific occasions, usually at noon on Tuesdays, Wednesdays or Thursdays. This is because the length of the vote is unpredictable and if it continues for longer than allocated it can disrupt other debates and meetings later in the day. Members are arranged in a hemicycle according to their political groups (in the Common Assembly, prior to 1958, members sat alphabetically) who are ordered mainly by left to right, but some smaller groups are placed towards the outer ring of the Parliament. All desks are equipped with microphones, headphones for translation and electronic voting equipment. The leaders of the groups sit on the front benches at the centre, and in the very centre is a podium for guest speakers. The remaining half of the circular chamber is primarily composed of the raised area where the President and staff sit. Further benches are provided between the sides of this area and the MEPs, these are taken up by the Council on the far left and the commission on the far right. 
Both the Brussels and Strasbourg hemicycles roughly follow this layout with only minor differences. The hemicycle design is a compromise between the different Parliamentary systems. The British-based system has the different groups directly facing each other while the French-based system is a semicircle (and the traditional German system had all members in rows facing a rostrum for speeches). Although the design is mainly based on a semicircle, the opposite ends of the spectrum do still face each other. With access to the chamber limited, entrance is controlled by ushers who aid MEPs in the chamber (for example in delivering documents). The ushers can also occasionally act as a form of police in enforcing the President's authority, for example in ejecting an MEP who is disrupting the session (although this is rare). The first head of protocol in the Parliament was French, so many of the duties in the Parliament are based on the French model first developed following the French Revolution. The 180 ushers are highly visible in the Parliament, dressed in black tails and wearing a silver chain, and are recruited in the same manner as the European civil service. The President is allocated a personal usher. The President is essentially the speaker of the Parliament and presides over the plenary when it is in session. The President's signature is required for all acts adopted by co-decision, including the EU budget. The President is also responsible for representing the Parliament externally, including in legal matters, and for the application of the rules of procedure. The President is elected for two-and-a-half-year terms, meaning two elections per parliamentary term. The current President of the European Parliament is Roberta Metsola, who was elected in January 2022. In most countries, the protocol of the head of state comes before all others; however, in the EU the Parliament is listed as the first institution, and hence the protocol of its president comes before any other European, or national, protocol. The gifts given to numerous visiting dignitaries depend upon the President. President Josep Borrell MEP of Spain gave his counterparts a crystal cup created by an artist from Barcelona who had engraved upon it parts of the Charter of Fundamental Rights among other things. A number of notable figures have been President of the Parliament and its predecessors. The first President was Paul-Henri Spaak MEP, one of the founding fathers of the Union. Other founding fathers include Alcide de Gasperi MEP and Robert Schuman MEP. The first two female Presidents were Simone Veil MEP in 1979 (first President of the elected Parliament) and Nicole Fontaine MEP in 1999, both Frenchwomen. A previous president, Jerzy Buzek, was the first East-Central European to lead an EU institution, a former Prime Minister of Poland who rose out of the Solidarity movement that helped overthrow communism in the Eastern Bloc. During the election of a President, the previous President (or, if unable to, one of the previous vice-presidents) presides over the chamber. Prior to 2009, the oldest member fulfilled this role but the rule was changed to prevent far-right French MEP Jean-Marie Le Pen from taking the chair. Below the President, there are 14 Vice-Presidents who chair debates when the President is not in the chamber. There are a number of other bodies and posts responsible for the running of parliament besides these speakers. 
The two main bodies are the Bureau, which is responsible for budgetary and administration issues, and the Conference of Presidents, which is a governing body composed of the presidents of each of the parliament's political groups. Looking after the financial and administrative interests of members are five Quaestors. As of 2014, the European Parliament budget was EUR 1.756 billion. A 2008 report on the Parliament's finances highlighted certain overspending and mis-payments. Despite some MEPs calling for the report to be published, Parliamentary authorities had refused until an MEP broke confidentiality and leaked it. The Parliament has 20 Standing Committees consisting of 25 to 73 MEPs each (reflecting the political make-up of the whole Parliament) including a chair, a bureau and secretariat. They meet twice a month in public to draw up, amend and adopt legislative proposals and reports to be presented to the plenary. The rapporteurs for a committee are supposed to present the view of the committee, although notably this has not always been the case. In the events leading to the resignation of the Santer Commission, the rapporteur went against the Budgetary Control Committee's narrow vote to discharge the budget, and urged the Parliament to reject it. Committees can also set up sub-committees (e.g. the Subcommittee on Human Rights) and temporary committees to deal with a specific topic (e.g. on extraordinary rendition). The chairs of the Committees co-ordinate their work through the "Conference of Committee Chairmen". When co-decision was introduced, it increased the Parliament's powers in a number of areas, but most notably those covered by the Committee on the Environment, Public Health and Food Safety. Previously this committee was considered by MEPs as a "Cinderella committee"; however, as it gained a new importance, it became more professional and rigorous, attracting increasing attention to its work. The nature of the committees differs from their national counterparts as, although smaller in comparison to those of the United States Congress, the European Parliament's committees are unusually large by European standards, with between eight and twelve dedicated members of staff and three to four support staff. Considerable administration, archives and research resources are also at the disposal of the whole Parliament when needed. Delegations of the Parliament are formed in a similar manner and are responsible for relations with Parliaments outside the EU. There are 34 delegations, each made up of around 15 MEPs; chairpersons of the delegations also cooperate in a conference, as the committee chairs do. They include "Interparliamentary delegations" (maintaining relations with parliaments outside the EU), "joint parliamentary committees" (maintaining relations with parliaments of states which are candidates or associates of the EU), the delegation to the ACP-EU Joint Parliamentary Assembly and the delegation to the Euro-Mediterranean Parliamentary Assembly. MEPs also participate in other international activities such as the Euro-Latin American Parliamentary Assembly, the Transatlantic Legislators' Dialogue and election observation in third countries. The Intergroups in the European Parliament are informal fora which gather MEPs from various political groups around any topic. They do not express the view of the European Parliament. They serve a double purpose: to address topics which cut across several committees, and to do so in a less formal manner. 
Their daily secretariat can be run either through the office of MEPs or through interest groups, be they corporate lobbies or NGOs. The favored access to MEPs which the organization running the secretariat enjoys can be one explanation for the multiplication of Intergroups in the 1990s. They are now strictly regulated and financial support, direct or otherwise (via Secretariat staff, for example), must be officially specified in a declaration of financial interests. Intergroups are also established or renewed at the beginning of each legislature through a specific process. Indeed, the proposal for the constitution or renewal of an Intergroup must be supported by at least 3 political groups, whose support is limited to a specific number of proposals in proportion to their size (for example, for the legislature 2014–2019, the EPP or S&D political groups could support 22 proposals whereas the Greens/EFA or the EFDD political groups only 7). Speakers in the European Parliament are entitled to speak in any of the 24 official languages of the European Union, ranging from French and German to Maltese and Irish. Simultaneous interpreting is offered in all plenary sessions, and all final texts of legislation are translated. With twenty-four languages, the European Parliament is the most multilingual parliament in the world and the biggest employer of interpreters in the world (employing 350 full-time and 400 freelancers when there is higher demand). Citizens may also address the Parliament in Basque, Catalan/Valencian and Galician. Usually a language is translated from a foreign tongue into a translator's native tongue. Due to the large number of languages, some being minor ones, since 1995 interpreting is sometimes done the opposite way, out of an interpreter's native tongue (the "retour" system). In addition, a speech in a minor language may be interpreted through a third language for lack of interpreters ("relay" interpreting) – for example, when interpreting out of Estonian into Maltese. Due to the complexity of the issues, interpretation is not word for word. Instead, interpreters have to convey the political meaning of a speech, regardless of their own views. This requires detailed understanding of the politics and terms of the Parliament, involving a great deal of preparation beforehand (e.g. reading the documents in question). Difficulty can often arise when MEPs use profanities, jokes and word play or speak too fast. While some see speaking their native language as an important part of their identity, and can speak more fluently in debates, interpretation and its cost have been criticised by some. A 2006 report by Alexander Stubb MEP highlighted that, by only using English, French and German, costs could be reduced from €118,000 per day (for 21 languages then – Romanian, Bulgarian and Croatian having not yet been included) to €8,900 per day. There has also been a small-scale campaign to make French the reference language for all legal texts, on the basis of an argument that it is clearer and more precise for legal purposes. Because the proceedings are translated into all of the official EU languages, they have been used to make a multilingual corpus known as Europarl. It is widely used to train statistical machine translation systems. On 12 December 2022, President Metsola announced that all work with Qatar would be suspended. A European Union correspondent, Jack Parrock, confirmed on the basis of sources close to Qatar that the UAE was involved in plotting the corruption scandal. 
Parrock said the official investigations, leaked documents and a number of European sources have confirmed the Emirati involvement in planning the alleged bribery case against Qatar. In a separate report by The European Microscope, documents revealed that the UAE made extensive efforts to woo multiple members of the European Parliament. Abu Dhabi intensified the lobbying campaign to build its support within the European Parliament and to push its members to speak in favour of the Emirates. After Qatari officials, the Italian web publisher Dagospia also alleged that the UAE's plan against Qatar involved Tahnoun bin Zayed. It is alleged that the brother of UAE President Mohamed bin Zayed Al Nahyan gave Belgium the tips that led to the investigations against Qatar. At the same meeting, the Greens–European Free Alliance and Renew Europe both called for an inquiry committee to be set up by the European Parliament. The suspension of Parliamentary business at this time was significant as it came just three days before the Parliament was due to vote on introducing a visa-free travel agreement with Qatar and other countries. This resulted in the vote on visa-free travel to Ecuador, Kuwait, and Oman also being canceled. In addition, a major and controversial air transit agreement that would have allowed Qatar Airways unlimited access to the EU market was put on hold after warnings that Qatar may have interfered in Parliament's internal deliberations on the agreement. During the negotiations there was criticism by EU member states that the agreement, negotiated by the Parliament's transport committee, was unduly favourable to Qatar. On 16 December 2022, an article published by Politico elucidated the connection between Antonio Panzeri and Abderrahim Atmoun. Pier Antonio Panzeri, a former Italian member of the European Parliament who headed the assembly's Maghreb delegation, and Abderrahim Atmoun, his co-president of the EU-Morocco joint parliamentary committee, attended an award ceremony in 2014 where they received awards from King Mohammed VI of Morocco. Atmoun, now Morocco's ambassador in Warsaw, posted pictures from the ceremony with the king, as well as a series of pictures showcasing his long-term association with Panzeri, a man whom he has publicly claimed as a friend since as far back as 2011. The later pictures also include Francesco Giorgi, with the three of them seen sitting together in a meeting room. Later, in 2022, the three men were caught up in the Parliament's biggest corruption scandal as Belgium launched an investigation into whether Qatar and Morocco had bought influence in the European Parliament. Panzeri and Giorgi, along with Giorgi's partner Eva Kaili, are in jail facing preliminary charges of corruption. Warrants were also issued against Panzeri's wife and daughter in connection with influence buying, with Atmoun mentioned as having given gifts to them. The lawyers have declined to comment and Morocco's embassies in Warsaw and Brussels are not responding to calls. Panzeri's wife and daughter also denied any wrongdoing. President of the European Parliament Roberta Metsola issued a statement in January 2023 saying that she had moved to remove parliamentary immunity from two MEPs implicated in the ongoing corruption scandal after receiving a request from the Belgian police. The European Parliament has faced criticism over its prodigality and for being too complacent about conflicts of interest. Its refusal to become a full member of GRECO, unlike all the EU member states, is also a matter of criticism. 
According to the European Parliament website, the annual parliament budget for 2021 was €2.064 billion, which corresponds to 1.2% of EU budget. The main cost categories were: According to a European Parliament study prepared in 2013, the Strasbourg seat costs an extra €103 million over maintaining a single location and according to the Court of Auditors an additional €5 million is related to travel expenses caused by having two seats. As a comparison, the German lower house of parliament (Bundestag) is estimated to cost €517 million in total for 2018, for a parliament with 709 members. The British House of Commons reported total annual costs in 2016-2017 of £249 million (€279 million). It had 650 seats. According to The Economist, the European Parliament costs more than the British, French and German parliaments combined. A quarter of the costs is estimated to be related to translation and interpretation costs (c. €460 million) and the double seats are estimated to add an additional €180 million a year. For a like-for-like comparison, these two cost blocks can be excluded. On 2 July 2018, MEPs rejected proposals to tighten the rules around the General Expenditure Allowance (GEA), which "is a controversial €4,416 per month payment that MEPs are given to cover office and other expenses, but they are not required to provide any evidence of how the money is spent". The Parliament is based in three different cities with numerous buildings. A protocol attached to the Treaty of Amsterdam requires that 12 plenary sessions be held in Strasbourg (none in August but two in October), which is the Parliament's official seat, while extra part sessions as well as committee meetings are held in Brussels. Luxembourg City hosts the Secretariat of the European Parliament. The European Parliament is one of at least two assemblies in the world with more than one meeting place (another being the parliament of the Isle of Man, Tynwald) and one of the few that does not have the power to decide its own location. The Strasbourg seat is seen as a symbol of reconciliation between France and Germany, the Strasbourg region having been fought over by the two countries in the past. However, the cost and inconvenience of having two seats is questioned. While Strasbourg is the official seat, and sits alongside the Council of Europe, Brussels is home to nearly all other major EU institutions, with the majority of Parliament's work being carried out there. Critics have described the two-seat arrangement as a "travelling circus", and there is a strong movement to establish Brussels as the sole seat. This is because the other political institutions (the commission, Council and European Council) are located there, and hence Brussels is treated as the 'capital' of the EU. This movement has received strong backing from numerous figures, including Margot Wallström, Commission First-Vice President from 2004 to 2010, who stated that "something that was once a very positive symbol of the EU reuniting France and Germany has now become a negative symbol – of wasting money, bureaucracy and the insanity of the Brussels institutions". The Green Party has also noted the environmental cost in a study led by Jean Lambert MEP and Caroline Lucas MEP; in addition to the extra 200 million euro spent on the extra seat, there are over 20,268 tonnes of additional carbon dioxide, undermining any environmental stance of the institution and the Union. The campaign is further backed by a million-strong online petition started by Cecilia Malmström MEP. 
In August 2014, an assessment by the European Court of Auditors calculated that relocating the Strasbourg seat of the European Parliament to Brussels would save €113.8 million per year. In 2006, there were allegations of irregularities in the charges made by the city of Strasbourg on buildings the Parliament rented, thus further harming the case for the Strasbourg seat. Most MEPs prefer Brussels as a single base. A poll of MEPs found 89% of the respondents wanting a single seat, and 81% preferring Brussels. Another survey found 68% support. In July 2011, an absolute majority of MEPs voted in favour of a single seat. In early 2011, the Parliament voted to scrap one of the Strasbourg sessions by holding two within a single week. The mayor of Strasbourg officially reacted by stating "we will counter-attack by upturning the adversary's strength to our own profit, as a judoka would do". However, as Parliament's seat is now fixed by the treaties, it can only be changed by the Council acting unanimously, meaning that France could veto any move. Former French President Nicolas Sarkozy stated that the Strasbourg seat is "non-negotiable" and that France has no intention of surrendering the only EU Institution on French soil. Given France's declared intention to veto any relocation to Brussels, some MEPs have advocated civil disobedience by refusing to take part in the monthly exodus to Strasbourg. However, the main building in Brussels has been suffering for more than a decade from a state of degradation. Renovation or reconstruction works including an hemicycle were estimated to cost at least €500 million in 2017 with fear that the cost would be even higher and possibly escalate up to €1 billion, whereas the seat in Strasbourg already offers a fully-fledged hemicycle. Over the last few years, European institutions have committed to promoting transparency, openness, and the availability of information about their work. In particular, transparency is regarded as pivotal to the action of European institutions and a general principle of EU law, to be applied to the activities of EU institutions in order to strengthen the Union's democratic foundation. The general principles of openness and transparency are reaffirmed in the articles 8 A, point 3 and 10.3 of the Treaty of Lisbon and the Maastricht Treaty respectively, stating that "every citizen shall have the right to participate in the democratic life of the Union. Decisions shall be taken as openly and as closely as possible to the citizen". Furthermore, both treaties acknowledge the value of dialogue between citizens, representative associations, civil society, and European institutions. Article 17 of the Treaty on the Functioning of the European Union (TFEU) lays the juridical foundation for an open, transparent dialogue between European institutions and churches, religious associations, and non-confessional and philosophical organisations. In July 2014, in the beginning of the 8th term, then President of the European Parliament Martin Schulz tasked Antonio Tajani, then vice-president, with implementing the dialogue with the religious and confessional organisations included in article 17. In this framework, the European Parliament hosts high-level conferences on inter-religious dialogue, also with focus on current issues and in relation with parliamentary works. 
The chair of European Parliament Mediator for International Parental Child Abduction was established in 1987 on the initiative of British MEP Charles Henry Plumb, with the goal of helping minor children of international couples who fall victim to parental abduction. The Mediator finds negotiated solutions in the best interest of the minor when said minor is abducted by a parent following separation of the couple, regardless of whether the couple is married or unmarried. Since its establishment, the chair has been held by Mairead McGuinness (since 2014), Roberta Angelilli (2009–2014), Evelyne Gebhardt (2004–2009), Mary Banotti (1995–2004), and Marie-Claude Vayssade (1987–1994). The Mediator's main task is to assist parents in finding a solution in the minor's best interest through mediation, i.e. a form of dispute resolution that is an alternative to a lawsuit. The Mediator acts at the request of a citizen and, after evaluating the request, starts a mediation process aimed at reaching an agreement. Once signed by both parties and the Mediator, the agreement is official. The nature of the agreement is that of a private contract between the parties. In defining the agreement, the European Parliament offers the parties the legal support necessary to reach a sound, lawful agreement based on legality and equity. The agreement can be ratified by the competent national courts and can also lay the foundation for consensual separation or divorce. The European Parliamentary Research Service (EPRS) is the European Parliament's in-house research department and think tank. It provides Members of the European Parliament – and, where appropriate, parliamentary committees – with independent, objective and authoritative analysis of, and research on, policy issues relating to the European Union, in order to assist them in their parliamentary work. It is also designed to increase Members' and EP committees' capacity to scrutinise and oversee the European Commission and other EU executive bodies. EPRS aims to provide a comprehensive range of products and services, backed by specialist internal expertise and knowledge sources in all policy fields, so empowering Members and committees through knowledge and contributing to the Parliament's effectiveness and influence as an institution. In undertaking this work, the EPRS supports and promotes parliamentary outreach to the wider public, including dialogue with relevant stakeholders in the EU's system of multi-level governance. All EPRS publications are publicly available on the EP Think Tank platform. The European Parliament periodically commissions opinion polls and studies on public opinion trends in Member States to survey perceptions and expectations of citizens about its work and the overall activities of the European Union. Topics include citizens' perception of the European Parliament's role, their knowledge of the institution, their sense of belonging to the European Union, opinions on European elections and European integration, identity, citizenship and political values, as well as current issues such as climate change, the economy and politics. Eurobarometer analyses seek to provide an overall picture of national situations, regional specificities, socio-demographic cleavages, and historical trends. With the Sakharov Prize, created in 1988, the European Parliament supports human rights by awarding individuals who contribute to promoting human rights worldwide, thus raising awareness of human rights violations. 
Priorities include: protection of human rights and fundamental liberties, with particular focus on freedom of expression; protection of minority rights; compliance with international law; and development of democracy and authentic rule of law. The European Charlemagne Youth Prize seeks to encourage youth participation in the European integration process. It is awarded by the European Parliament and the Foundation of the International Charlemagne Prize of Aachen to youth projects aimed at nurturing a common European identity and European citizenship. The European Citizens' Prize is awarded by the European Parliament to activities and actions carried out by citizens and associations to promote integration between the citizens of EU member states and transnational cooperation projects in the EU. Since 2007, the LUX Prize has been awarded by the European Parliament to films dealing with current topics of public European interest that encourage reflection on Europe and its future. Over time, the Lux Prize has become a prestigious cinema award which supports European film and production outside the EU as well. Since 2021, the Daphne Caruana Galizia Journalism Prize has been awarded by the European Parliament to outstanding journalism that reflects EU values. The prize consists of an award of 20,000 euros, and the first winner was revealed in October 2021. This award is named after the late Maltese journalist Daphne Caruana Galizia, who was assassinated in Malta on 16 October 2017. In 2021 the prize was awarded to the Pegasus Project.
[ { "paragraph_id": 0, "text": "The European Parliament (EP) is one of the legislative bodies of the European Union and one of its seven institutions. Together with the Council of the European Union (known as the Council and informally as the Council of Ministers), it adopts European legislation, following a proposal by the European Commission. The Parliament is composed of 705 members (MEPs). It represents the second-largest democratic electorate in the world (after the Parliament of India), with an electorate of 375 million eligible voters in 2009.", "title": "" }, { "paragraph_id": 1, "text": "Since 1979, the Parliament has been directly elected every five years by the citizens of the European Union through universal suffrage. Voter turnout in parliamentary elections decreased each time after 1979 until 2019, when voter turnout increased by eight percentage points, and rose above 50% for the first time since 1994. The voting age is 18 in all EU member states except for Malta, Austria and Germany, where it is 16, and Greece, where it is 17. Belgian citizens can request to vote from the age of 16 as well.", "title": "" }, { "paragraph_id": 2, "text": "Although the European Parliament has legislative power, as does the Council, it does not formally possess the right of initiative as most national parliaments of the member states do, with the right of initiative being solely a prerogative of the European Commission. The Parliament is the \"first institution\" of the European Union (mentioned first in its treaties and having ceremonial precedence over the other EU institutions), and shares equal legislative and budgetary powers with the Council (except on a few issues where special legislative procedures apply). It likewise has equal control over the EU budget. Ultimately, the European Commission, which serves as the executive branch of the EU, is accountable to Parliament. In particular, Parliament can decide whether or not to approve the European Council's nominee for President of the Commission, and is further tasked with approving (or rejecting) the appointment of the commission as a whole. It can subsequently force the current Commission to resign by adopting a motion of censure.", "title": "" }, { "paragraph_id": 3, "text": "The president of the European Parliament is the body's speaker and presides over the multi-party chamber. The five largest political groups are the European People's Party Group (EPP), the Progressive Alliance of Socialists and Democrats (S&D), Renew Europe (previously ALDE), the Greens/European Free Alliance (Greens/EFA) and Identity and Democracy (ID). The last EU-wide election was held in 2019.", "title": "" }, { "paragraph_id": 4, "text": "The Parliament's headquarters are in Strasbourg, France, and has its administrative offices in Luxembourg City. Plenary sessions are \"normally held in Strasbourg for four days a month, but sometimes there are additional sessions in Brussels\", while the Parliament's committee meetings are held primarily in Brussels, Belgium.", "title": "" }, { "paragraph_id": 5, "text": "The Parliament, like the other EU institutions, was not designed in its current form when it first met on 10 September 1952. One of the oldest common institutions, it began as the Common Assembly of the European Coal and Steel Community (ECSC). It was a consultative assembly of 78 appointed parliamentarians drawn from the national parliaments of member states, having no legislative powers. 
The change since its foundation was highlighted by Professor David Farrell of the University of Manchester: \"For much of its life, the European Parliament could have been justly labelled a 'multi-lingual talking shop'.\"", "title": "History" }, { "paragraph_id": 6, "text": "Its development since its foundation shows how the European Union's structures have evolved without a clear 'master plan'. Tom Reid of The Washington Post has said of the union that \"nobody would have deliberately designed a government as complex and as redundant as the EU\". Even the Parliament's three working locations, which have switched several times, are a result of various agreements or lack of agreements. Although most MEPs would prefer to be based just in Brussels, at John Major's 1992 Edinburgh summit, France engineered a treaty amendment to confirm the European Parliament's seat permanently in Strasbourg.", "title": "History" }, { "paragraph_id": 7, "text": "The body was not mentioned in the original Schuman Declaration. It was assumed or hoped that difficulties with the British would be resolved to allow the Parliamentary Assembly of the Council of Europe to perform legislative tasks. A separate Assembly was introduced during negotiations on the Treaty as an institution to counterbalance and monitor the executive while providing democratic legitimacy. The wording of the ECSC Treaty demonstrated leaders' desire for more than a normal consultative assembly by allowing for direct election and using the term \"representatives of the people\". Its early importance was highlighted when the Assembly was given the task of drawing up the draft treaty to establish a European Political Community. By this document, the Ad Hoc Assembly was established on 13 September 1952 with extra members, but after the failure of the negotiated and proposed European Defence Community (French parliament veto), the project was dropped.", "title": "History" }, { "paragraph_id": 8, "text": "Despite this, the European Economic Community and Euratom were established in 1958 by the Treaties of Rome. The Common Assembly was shared by all three communities (which had separate executives) and it renamed itself the European Parliamentary Assembly. The first meeting was held on 19 March 1958 having been set up in Luxembourg City, it elected Schuman as its president and on 13 May it rearranged itself to sit according to political ideology rather than nationality. This is seen as the birth of the modern European Parliament, with Parliament's 50 years celebrations being held in March 2008 rather than 2002.", "title": "History" }, { "paragraph_id": 9, "text": "The three communities merged their remaining organs as the European Communities in 1967, and the body's name was changed to the current \"European Parliament\" in 1962. In 1970 the Parliament was granted power over areas of the Communities' budget, which were expanded to the whole budget in 1975. Under the Rome Treaties, the Parliament should have become elected. However, the Council was required to agree a uniform voting system beforehand, which it failed to do. 
The Parliament threatened to take the Council to the European Court of Justice; this led to a compromise whereby the Council would agree to elections, but the issue of voting systems would be put off until a later date.", "title": "History" }, { "paragraph_id": 10, "text": "For its sessions the assembly, and later the parliament, until 1999 convened in the same premises as the Parliamentary Assembly of the Council of Europe: the House of Europe until 1977, and the Palace of Europe until 1999.", "title": "History" }, { "paragraph_id": 11, "text": "In 1979, its members were directly elected for the first time. This sets it apart from similar institutions such as those of the Parliamentary Assembly of the Council of Europe or Pan-African Parliament which are appointed. After that first election, the parliament held its first session on 17 July 1979, electing Simone Veil MEP as its president. Veil was also the first female president of the Parliament since it was formed as the Common Assembly.", "title": "History" }, { "paragraph_id": 12, "text": "As an elected body, the Parliament began to draft proposals addressing the functioning of the EU. For example, in 1984, inspired by its previous work on the Political Community, it drafted the \"draft Treaty establishing the European Union\" (also known as the 'Spinelli Plan' after its rapporteur Altiero Spinelli MEP). Although it was not adopted, many ideas were later implemented by other treaties. Furthermore, the Parliament began holding votes on proposed Commission Presidents from the 1980s, before it was given any formal right to veto.", "title": "History" }, { "paragraph_id": 13, "text": "Since it became an elected body, the membership of the European Parliament has simply expanded whenever new nations have joined (the membership was also adjusted upwards in 1994 after German reunification). Following this, the Treaty of Nice imposed a cap on the number of members to be elected: 732.", "title": "History" }, { "paragraph_id": 14, "text": "Like the other institutions, the Parliament's seat was not yet fixed. The provisional arrangements placed Parliament in Strasbourg, while the Commission and Council had their seats in Brussels. In 1985 the Parliament, wishing to be closer to these institutions, built a second chamber in Brussels and moved some of its work there despite protests from some states. A final agreement was eventually reached by the European Council in 1992. It stated the Parliament would retain its formal seat in Strasbourg, where twelve sessions a year would be held, but with all other parliamentary activity in Brussels. This two-seat arrangement was contested by the Parliament, but was later enshrined in the Treaty of Amsterdam. To this day the institution's locations are a source of contention.", "title": "History" }, { "paragraph_id": 15, "text": "The Parliament gained more powers from successive treaties, namely through the extension of the ordinary legislative procedure (then called the codecision procedure), and in 1999, the Parliament forced the resignation of the Santer Commission. The Parliament had refused to approve the Community budget over allegations of fraud and mis-management in the commission. 
The two main parties took on a government-opposition dynamic for the first time during the crisis which ended in the Commission resigning en masse, the first of any forced resignation, in the face of an impending censure from the Parliament.", "title": "History" }, { "paragraph_id": 16, "text": "In 2004, following the largest trans-national election in history, despite the European Council choosing a President from the largest political group (the EPP), the Parliament again exerted pressure on the commission. During the Parliament's hearings of the proposed Commissioners MEPs raised doubts about some nominees with the Civil Liberties committee rejecting Rocco Buttiglione from the post of Commissioner for Justice, Freedom and Security over his views on homosexuality. That was the first time the Parliament had ever voted against an incoming Commissioner and despite Barroso's insistence upon Buttiglione the Parliament forced Buttiglione to be withdrawn. A number of other Commissioners also had to be withdrawn or reassigned before Parliament allowed the Barroso Commission to take office.", "title": "History" }, { "paragraph_id": 17, "text": "Along with the extension of the ordinary legislative procedure, the Parliament's democratic mandate has given it greater control over legislation against the other institutions. In voting on the Bolkestein directive in 2006, the Parliament voted by a large majority for over 400 amendments that changed the fundamental principle of the law. The Financial Times described it in the following terms:", "title": "History" }, { "paragraph_id": 18, "text": "That is where the European parliament has suddenly come into its own. It marks another shift in power between the three central EU institutions. Last week's vote suggests that the directly elected MEPs, in spite of their multitude of ideological, national and historical allegiances, have started to coalesce as a serious and effective EU institution, just as enlargement has greatly complicated negotiations inside both the Council and Commission.", "title": "History" }, { "paragraph_id": 19, "text": "In 2007, for the first time, Justice Commissioner Franco Frattini included Parliament in talks on the second Schengen Information System even though MEPs only needed to be consulted on parts of the package. After that experiment, Frattini indicated he would like to include Parliament in all justice and criminal matters, informally pre-empting the new powers they were due to gain in 2009 as part of the Treaty of Lisbon. Between 2007 and 2009, a special working group on parliamentary reform implemented a series of changes to modernise the institution such as more speaking time for rapporteurs, increased committee co-operation and other efficiency reforms.", "title": "History" }, { "paragraph_id": 20, "text": "The Lisbon Treaty came into force on 1 December 2009, granting Parliament powers over the entire EU budget, making Parliament's legislative powers equal to the Council's in nearly all areas and linking the appointment of the Commission President to Parliament's own elections. Barroso gained the support of the European Council for a second term and secured majority support from the Parliament in September 2009. Parliament voted 382 votes in favour and 219 votes against (117 abstentions) with support of the European People's Party, European Conservatives and Reformists and the Alliance of Liberals and Democrats for Europe. 
The liberals gave support after Barroso gave them a number of concessions; the liberals previously joined the socialists' call for a delayed vote (the EPP had wanted to approve Barroso in July of that year).", "title": "History" }, { "paragraph_id": 21, "text": "Once Barroso put forward the candidates for his next Commission, another opportunity to gain concessions arose. Bulgarian nominee Rumiana Jeleva was forced to step down by Parliament due to concerns over her experience and financial interests. She only had the support of the EPP which began to retaliate on left wing candidates before Jeleva gave in and was replaced (setting back the final vote further).", "title": "History" }, { "paragraph_id": 22, "text": "Before the final vote, Parliament demanded a number of concessions as part of a future working agreement under the new Lisbon Treaty. The deal includes that Parliament's president will attend high level Commission meetings. Parliament will have a seat in the EU's Commission-led international negotiations and have a right to information on agreements. However, Parliament secured only an observer seat. Parliament also did not secure a say over the appointment of delegation heads and special representatives for foreign policy. Although they will appear before parliament after they have been appointed by the High Representative. One major internal power was that Parliament wanted a pledge from the Commission that it would put forward legislation when parliament requests. Barroso considered this an infringement on the commission's powers but did agree to respond within three months. Most requests are already responded to positively.", "title": "History" }, { "paragraph_id": 23, "text": "During the setting up of the European External Action Service (EEAS), Parliament used its control over the EU budget to influence the shape of the EEAS. MEPs had aimed at getting greater oversight over the EEAS by linking it to the commission and having political deputies to the High Representative. MEPs did not manage to get everything they demanded. However, they got broader financial control over the new body. In December 2017, Politico denounced the lack of racial diversity among Members of the European Parliament. The subsequent news coverage contributed to create the Brussels So White movement. In January 2019, Conservative MEPs supported proposals to boost opportunities for women and tackle sexual harassment in the European Parliament.", "title": "History" }, { "paragraph_id": 24, "text": "In 2022, four people were arrested because of corruption. This came to be known as the Qatar corruption scandal at the European Parliament.", "title": "History" }, { "paragraph_id": 25, "text": "In October 2023, the Parliament adopted a resolution to condemn \"Hamas' despicable terrorist attacks against Israel\".", "title": "History" }, { "paragraph_id": 26, "text": "The Parliament and Council have been compared to the two chambers of a bicameral legislature. However, there are some differences from national legislatures; for example, neither the Parliament nor the Council have the power of legislative initiative (except for the fact that the Council has the power in some intergovernmental matters). In Community matters, this is a power uniquely reserved for the European Commission (the executive). Therefore, while Parliament can amend and reject legislation, to make a proposal for legislation, it needs the commission to draft a bill before anything can become law. 
The value of such a power has been questioned by noting that in the national legislatures of the member states 85% of initiatives introduced without executive support fail to become law. Yet it has been argued by former Parliament president Hans-Gert Pöttering that as the Parliament does have the right to ask the commission to draft such legislation, and as the commission is following Parliament's proposals more and more Parliament does have a de facto right of legislative initiative.", "title": "Powers and functions" }, { "paragraph_id": 27, "text": "The Parliament also has a great deal of indirect influence, through non-binding resolutions and committee hearings, as a \"pan-European soapbox\" with the ear of thousands of Brussels-based journalists. There is also an indirect effect on foreign policy; the Parliament must approve all development grants, including those overseas. For example, the support for post-war Iraq reconstruction, or incentives for the cessation of Iranian nuclear development, must be supported by the Parliament. Parliamentary support was also required for the transatlantic passenger data-sharing deal with the United States. Finally, Parliament holds a non-binding vote on new EU treaties but cannot veto it. However, when Parliament threatened to vote down the Nice Treaty, the Belgian and Italian Parliaments said they would veto the treaty on the European Parliament's behalf.", "title": "Powers and functions" }, { "paragraph_id": 28, "text": "With each new treaty, the powers of the Parliament, in terms of its role in the Union's legislative procedures, have expanded. The procedure which has slowly become dominant is the \"ordinary legislative procedure\" (previously named \"codecision procedure\"), which provides an equal footing between Parliament and Council. In particular, under the procedure, the Commission presents a proposal to Parliament and the Council which can only become law if both agree on a text, which they do (or not) through successive readings up to a maximum of three. In its first reading, Parliament may send amendments to the Council which can either adopt the text with those amendments or send back a \"common position\". That position may either be approved by Parliament, or it may reject the text by an absolute majority, causing it to fail, or it may adopt further amendments, also by an absolute majority. If the Council does not approve these, then a \"Conciliation Committee\" is formed. The committee is composed of the Council members plus an equal number of MEPs who seek to agree a compromise. Once a position is agreed, it has to be approved by Parliament, by a simple majority. This is also aided by Parliament's mandate as the only directly democratic institution, which has given it leeway to have greater control over legislation than other institutions, for example over its changes to the Bolkestein directive in 2006.", "title": "Powers and functions" }, { "paragraph_id": 29, "text": "The few other areas that operate the special legislative procedures are justice and home affairs, budget and taxation, and certain aspects of other policy areas, such as the fiscal aspects of environmental policy. In these areas, the Council or Parliament decide law alone. The procedure also depends upon which type of institutional act is being used. The strongest act is a regulation, an act or law which is directly applicable in its entirety. Then there are directives which bind member states to certain goals which they must achieve. 
They do this through their own laws and hence have room to manoeuvre in deciding upon them. A decision is an instrument which is focused at a particular person or group and is directly applicable. Institutions may also issue recommendations and opinions which are merely non-binding, declarations. There is a further document which does not follow normal procedures, this is a \"written declaration\" which is similar to an early day motion used in the Westminster system. It is a document proposed by up to five MEPs on a matter within the EU's activities used to launch a debate on that subject. Having been posted outside the entrance to the hemicycle, members can sign the declaration and if a majority do so it is forwarded to the President and announced to the plenary before being forwarded to the other institutions and formally noted in the minutes.", "title": "Powers and functions" }, { "paragraph_id": 30, "text": "The legislative branch officially holds the Union's budgetary authority with powers gained through the Budgetary Treaties of the 1970s and the Lisbon Treaty. The EU budget is subject to a form of the ordinary legislative procedure with a single reading giving Parliament power over the entire budget (before 2009, its influence was limited to certain areas) on an equal footing to the Council. If there is a disagreement between them, it is taken to a conciliation committee as it is for legislative proposals. If the joint conciliation text is not approved, the Parliament may adopt the budget definitively.", "title": "Powers and functions" }, { "paragraph_id": 31, "text": "The Parliament is also responsible for discharging the implementation of previous budgets based on the annual report of the European Court of Auditors. It has refused to approve the budget only twice, in 1984 and in 1998. On the latter occasion it led to the resignation of the Santer Commission; highlighting how the budgetary power gives Parliament a great deal of power over the commission. Parliament also makes extensive use of its budgetary, and other powers, elsewhere; for example in the setting up of the European External Action Service, Parliament has a de facto veto over its design as it has to approve the budgetary and staff changes.", "title": "Powers and functions" }, { "paragraph_id": 32, "text": "The President of the European Commission is proposed by the European Council on the basis of the European elections to Parliament. That proposal has to be approved by the Parliament (by a simple majority) who \"elect\" the President according to the treaties. Following the approval of the Commission President, the members of the commission are proposed by the President in accord with the member states. Each Commissioner comes before a relevant parliamentary committee hearing covering the proposed portfolio. They are then, as a body, approved or rejected by the Parliament.", "title": "Powers and functions" }, { "paragraph_id": 33, "text": "In practice, the Parliament has never voted against a President or his Commission, but it did seem likely when the Barroso Commission was put forward. The resulting pressure forced the proposal to be withdrawn and changed to be more acceptable to parliament. That pressure was seen as an important sign by some of the evolving nature of the Parliament and its ability to make the Commission accountable, rather than being a rubber stamp for candidates. 
Furthermore, in voting on the commission, MEPs also voted along party lines, rather than national lines, despite frequent pressure from national governments on their MEPs. This cohesion and willingness to use the Parliament's power ensured greater attention from national leaders, other institutions and the public – who previously gave the lowest ever turnout for the Parliament's elections.", "title": "Powers and functions" }, { "paragraph_id": 34, "text": "The Parliament also has the power to censure the Commission if they have a two-thirds majority which will force the resignation of the entire Commission from office. As with approval, this power has never been used but it was threatened to the Santer Commission, who subsequently resigned of their own accord. There are a few other controls, such as: the requirement of Commission to submit reports to the Parliament and answer questions from MEPs; the requirement of the President-in-office of the Council to present its programme at the start of their presidency; the obligation on the President of the European Council to report to Parliament after each of its meetings; the right of MEPs to make requests for legislation and policy to the commission; and the right to question members of those institutions (e.g. \"Commission Question Time\" every Tuesday). At present, MEPs may ask a question on any topic whatsoever, but in July 2008 MEPs voted to limit questions to those within the EU's mandate and ban offensive or personal questions.", "title": "Powers and functions" }, { "paragraph_id": 35, "text": "The Parliament also has other powers of general supervision, mainly granted by the Maastricht Treaty. The Parliament has the power to set up a Committee of Inquiry, for example over mad cow disease or CIA detention flights – the former led to the creation of the European veterinary agency. The Parliament can call other institutions to answer questions and if necessary to take them to court if they break EU law or treaties. Furthermore, it has powers over the appointment of the members of the Court of Auditors and the president and executive board of the European Central Bank. The ECB president is also obliged to present an annual report to the parliament.", "title": "Powers and functions" }, { "paragraph_id": 36, "text": "The European Ombudsman is elected by the Parliament, who deals with public complaints against all institutions. Petitions can also be brought forward by any EU citizen on a matter within the EU's sphere of activities. The Committee on Petitions hears cases, some 1500 each year, sometimes presented by the citizen themselves at the Parliament. While the Parliament attempts to resolve the issue as a mediator they do resort to legal proceedings if it is necessary to resolve the citizens dispute.", "title": "Powers and functions" }, { "paragraph_id": 37, "text": "The parliamentarians are known in English as Members of the European Parliament (MEPs). They are elected every five years by universal adult suffrage and sit according to political allegiance. About one third are women. Before the first direct elections, in 1979, they were appointed by their national parliaments.", "title": "Members" }, { "paragraph_id": 38, "text": "The Parliament has been criticized for underrepresentation of minority groups. In 2017, an estimated 17 MEPs were non-white, and of these, three were black, a disproportionately low number. 
According to activist organization European Network Against Racism, while an estimated 10% of Europe is composed of racial and ethnic minorities, only 5% of MEPs were members of such groups following the 2019 European Parliament election.", "title": "Members" }, { "paragraph_id": 39, "text": "Under the Lisbon Treaty, seats are allocated to each state according to population and the maximum number of members is set at 751 (however, as the President cannot vote while in the chair there will only be 750 voting members at any one time). Since 1 February 2020 and the United Kingdom's leaving the EU, 705 MEPs (including the president of the Parliament) sit in the European Parliament.", "title": "Members" }, { "paragraph_id": 40, "text": "Representation is currently limited to a maximum of 96 seats and a minimum of 6 seats per state and the seats are distributed according to \"degressive proportionality\", i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly 10x more influence per voter than citizens of the six largest countries.", "title": "Members" }, { "paragraph_id": 41, "text": "As of 2014, Germany (80.9 million inhabitants) has 96 seats (previously 99 seats), i.e. one seat for 843,000 inhabitants. Malta (0.4 million inhabitants) has 6 seats, i.e. one seat for 70,000 inhabitants.", "title": "Members" }, { "paragraph_id": 42, "text": "The new system implemented under the Lisbon Treaty, including revising the seating well before elections, was intended to avoid political horse trading when the allocations have to be revised to reflect demographic changes.", "title": "Members" }, { "paragraph_id": 43, "text": "Pursuant to this apportionment, the constituencies are formed. In four EU member states (Belgium, Ireland, Italy and Poland), the national territory is divided into a number of constituencies. In the remaining member states, the whole country forms a single constituency. All member states hold elections to the European Parliament using various forms of proportional representation.", "title": "Members" }, { "paragraph_id": 44, "text": "Due to the delay in ratifying the Lisbon Treaty, the seventh parliament was elected under the lower Nice Treaty cap. A small scale treaty amendment was ratified on 29 November 2011. This amendment brought in transitional provisions to allow the 18 additional MEPs created under the Lisbon Treaty to be elected or appointed before the 2014 election. Under the Lisbon Treaty reforms, Germany was the only state to lose members from 99 to 96. However, these seats were not removed until the 2014 election.", "title": "Members" }, { "paragraph_id": 45, "text": "Before 2009, members received the same salary as members of their national parliament. However, from 2009 a new members statute came into force, after years of attempts, which gave all members an equal monthly pay, of €8,484.05 each in 2016, subject to a European Union tax and which can also be taxed nationally. MEPs are entitled to a pension, paid by Parliament, from the age of 63. Members are also entitled to allowances for office costs and subsistence, and travelling expenses, based on actual cost. Besides their pay, members are granted a number of privileges and immunities. To ensure their free movement to and from the Parliament, they are accorded by their own states the facilities accorded to senior officials travelling abroad and, by other state governments, the status of visiting foreign representatives. 
When in their own state, they have all the immunities accorded to national parliamentarians, and, in other states, they have immunity from detention and legal proceedings. However, immunity cannot be claimed when a member is found committing a criminal offence and the Parliament also has the right to strip a member of their immunity.", "title": "Members" }, { "paragraph_id": 46, "text": "MEPs in Parliament are organised into eight different parliamentary groups, including thirty non-attached members known as non-inscrits. The two largest groups are the European People's Party (EPP) and the Socialists & Democrats (S&D). These two groups have dominated the Parliament for much of its life, continuously holding between 50 and 70 percent of the seats between them. No single group has ever held a majority in Parliament. As a result of being broad alliances of national parties, European group parties are very decentralised and hence have more in common with parties in federal states like Germany or the United States than unitary states like the majority of the EU states. Nevertheless, the European groups were actually more cohesive than their US counterparts between 2004 and 2009.", "title": "Members" }, { "paragraph_id": 47, "text": "Groups are often based on a single European political party such as the European People's Party. However, they can, like the liberal group, include more than one European party as well as national parties and independents. For a group to be recognised, it needs 23 MEPs from seven different countries. Groups receive funding from the parliament.", "title": "Members" }, { "paragraph_id": 48, "text": "Given that the Parliament does not form the government in the traditional sense of a Parliamentary system, its politics have developed along more consensual lines with dynamical coalitions rather than majority rule of competing parties and coalitions. Indeed, for much of its life it has been dominated by a grand coalition of the European People's Party and the Party of European Socialists. The two major parties tend to co-operate to find a compromise between their two groups leading to proposals endorsed by huge majorities. However, this does not always produce agreement, and each may instead try to build other alliances, the EPP normally with other centre-right or right wing Groups and the PES with centre-left or left wing groups. Sometimes, the Liberal Group is then in the pivotal position. There are also occasions where very sharp party political divisions have emerged, for example over the resignation of the Santer Commission.", "title": "Members" }, { "paragraph_id": 49, "text": "When the initial allegations against the Commission emerged, they were directed primarily against Édith Cresson and Manuel Marín, both socialist members. When the parliament was considering refusing to discharge the Community budget, President Jacques Santer stated that a no vote would be tantamount to a vote of no confidence. The Socialist group supported the commission and saw the issue as an attempt by the EPP to discredit their party ahead of the 1999 elections. Socialist leader, Pauline Green MEP, attempted a vote of confidence and the EPP put forward counter motions. During this period the two parties took on similar roles to a government-opposition dynamic, with the Socialists supporting the executive and EPP renouncing its previous coalition support and voting it down. 
Politicisation such as this has been increasing, in 2007 Simon Hix of the London School of Economics noted that:", "title": "Members" }, { "paragraph_id": 50, "text": "Our work also shows that politics in the European Parliament is becoming increasingly based around party and ideology. Voting is increasingly split along left-right lines, and the cohesion of the party groups has risen dramatically, particularly in the fourth and fifth parliaments. So there are likely to be policy implications here too.", "title": "Members" }, { "paragraph_id": 51, "text": "During the fifth term, 1999 to 2004, there was a break in the grand coalition resulting in a centre-right coalition between the Liberal and People's parties. This was reflected in the Presidency of the Parliament with the terms being shared between the EPP and the ELDR, rather than the EPP and Socialists. In the following term the liberal group grew to hold 88 seats, the largest number of seats held by any third party in Parliament. The EPP-S&D coalition lost their majority after the 2019 European Parliament election, requiring support by other political groups for a majority.", "title": "Members" }, { "paragraph_id": 52, "text": "Elections have taken place, directly in every member state, every five years since 1979. As of 2019 there have been nine elections. When a nation joins mid-term, a by-election will be held to elect their representatives. This has happened six times, most recently when Croatia joined in 2013. Elections take place across four days according to local custom and, apart from having to be proportional, the electoral system is chosen by the member state. This includes allocation of sub-national constituencies; while most members have a national list, some divide their allocation between regions. Seats are allocated to member states according to their population, since 2014 with no state having more than 96, but no fewer than 6, to maintain proportionality.", "title": "Members" }, { "paragraph_id": 53, "text": "The most recent Union-wide elections to the European Parliament were the European elections of 2019, held from 23 to 26 May 2019. They were the largest simultaneous transnational elections ever held anywhere in the world. The first session of the ninth parliament started 2 July 2019.", "title": "Members" }, { "paragraph_id": 54, "text": "European political parties have the exclusive right to campaign during the European elections (as opposed to their corresponding EP groups). There have been a number of proposals designed to attract greater public attention to the elections. One such innovation in the 2014 elections was that the pan-European political parties fielded \"candidates\" for president of the Commission, the so-called Spitzenkandidaten (German, \"leading candidates\" or \"top candidates\"). However, European Union governance is based on a mixture of intergovernmental and supranational features: the President of the European Commission is nominated by the European Council, representing the governments of the member states, and there is no obligation for them to nominate the successful \"candidate\". The Lisbon Treaty merely states that they should take account of the results of the elections when choosing whom to nominate. 
The so-called Spitzenkandidaten were Jean-Claude Juncker for the European People's Party, Martin Schulz for the Party of European Socialists, Guy Verhofstadt for the Alliance of Liberals and Democrats for Europe Party, Ska Keller and José Bové jointly for the European Green Party and Alexis Tsipras for the Party of the European Left.", "title": "Members" }, { "paragraph_id": 55, "text": "Turnout dropped consistently every year since the first election, and from 1999 until 2019 was below 50%. In 2007 both Bulgaria and Romania elected their MEPs in by-elections, having joined at the beginning of 2007. The Bulgarian and Romanian elections saw two of the lowest turnouts for European elections, just 28.6% and 28.3% respectively. This trend was interrupted in the 2019 election, when turnout increased by 8% EU-wide, rising to 50.6%, the highest since 1994.", "title": "Members" }, { "paragraph_id": 56, "text": "In England, Scotland and Wales, EP elections were originally held for a constituency MEP on a first-past-the-post basis. In 1999 the system was changed to a form of proportional representation where a large group of candidates stand for a post within a very large regional constituency. One could vote for a party, but not a candidate (unless that party had a single candidate).", "title": "Members" }, { "paragraph_id": 57, "text": "Each year the activities of the Parliament cycle between committee weeks where reports are discussed in committees and interparliamentary delegations meet, political group weeks for members to discuss work within their political groups and session weeks where members spend 3½ days in Strasbourg for part-sessions. In addition six 2-day part-sessions are organised in Brussels throughout the year. Four weeks are allocated as constituency week to allow members to do exclusively constituency work. Finally there are no meetings planned during the summer weeks. The Parliament has the power to meet without being convened by another authority. Its meetings are partly controlled by the treaties but are otherwise up to Parliament according to its own \"Rules of Procedure\" (the regulations governing the parliament).", "title": "Proceedings" }, { "paragraph_id": 58, "text": "During sessions, members may speak after being called on by the President. Members of the Council or Commission may also attend and speak in debates. Partly due to the need for interpretation, and the politics of consensus in the chamber, debates tend to be calmer and more polite than, say, the Westminster system. Voting is conducted primarily by a show of hands, that may be checked on request by electronic voting. Votes of MEPs are not recorded in either case, however; that only occurs when there is a roll-call ballot. This is required for the final votes on legislation and also whenever a political group or 30 MEPs request it. The number of roll-call votes has increased with time. Votes can also be a completely secret ballot (for example, when the president is elected). All recorded votes, along with minutes and legislation, are recorded in the Official Journal of the European Union and can be accessed online. Votes usually do not follow a debate, but rather they are grouped with other due votes on specific occasions, usually at noon on Tuesdays, Wednesdays or Thursdays. 
This is because the length of the vote is unpredictable and if it continues for longer than allocated it can disrupt other debates and meetings later in the day.", "title": "Proceedings" }, { "paragraph_id": 59, "text": "Members are arranged in a hemicycle according to their political groups (in the Common Assembly, prior to 1958, members sat alphabetically) who are ordered mainly by left to right, but some smaller groups are placed towards the outer ring of the Parliament. All desks are equipped with microphones, headphones for translation and electronic voting equipment. The leaders of the groups sit on the front benches at the centre, and in the very centre is a podium for guest speakers. The remaining half of the circular chamber is primarily composed of the raised area where the President and staff sit. Further benches are provided between the sides of this area and the MEPs, these are taken up by the Council on the far left and the commission on the far right. Both the Brussels and Strasbourg hemicycle roughly follow this layout with only minor differences. The hemicycle design is a compromise between the different Parliamentary systems. The British-based system has the different groups directly facing each other while the French-based system is a semicircle (and the traditional German system had all members in rows facing a rostrum for speeches). Although the design is mainly based on a semicircle, the opposite ends of the spectrum do still face each other. With access to the chamber limited, entrance is controlled by ushers who aid MEPs in the chamber (for example in delivering documents). The ushers can also occasionally act as a form of police in enforcing the President, for example in ejecting an MEP who is disrupting the session (although this is rare). The first head of protocol in the Parliament was French, so many of the duties in the Parliament are based on the French model first developed following the French Revolution. The 180 ushers are highly visible in the Parliament, dressed in black tails and wearing a silver chain, and are recruited in the same manner as the European civil service. The President is allocated a personal usher.", "title": "Proceedings" }, { "paragraph_id": 60, "text": "The President is essentially the speaker of the Parliament and presides over the plenary when it is in session. The President's signature is required for all acts adopted by co-decision, including the EU budget. The President is also responsible for representing the Parliament externally, including in legal matters, and for the application of the rules of procedure. The President is elected for two-and-a-half-year terms, meaning two elections per parliamentary term. The current President of the European Parliament is Roberta Metsola, who was elected in January 2022.", "title": "Proceedings" }, { "paragraph_id": 61, "text": "In most countries, the protocol of the head of state comes before all others; however, in the EU the Parliament is listed as the first institution, and hence the protocol of its president comes before any other European, or national, protocol. The gifts given to numerous visiting dignitaries depend upon the President. President Josep Borrell MEP of Spain gave his counterparts a crystal cup created by an artist from Barcelona who had engraved upon it parts of the Charter of Fundamental Rights among other things.", "title": "Proceedings" }, { "paragraph_id": 62, "text": "A number of notable figures have been President of the Parliament and its predecessors. 
The first President was Paul-Henri Spaak MEP, one of the founding fathers of the Union. Other founding fathers include Alcide de Gasperi MEP and Robert Schuman MEP. Before Roberta Metsola, the two female Presidents were Simone Veil MEP in 1979 (the first President of the elected Parliament) and Nicole Fontaine MEP in 1999, both Frenchwomen. A former president, Jerzy Buzek, was the first East-Central European to lead an EU institution; a former Prime Minister of Poland, he rose out of the Solidarity movement that helped overthrow communism in the Eastern Bloc.", "title": "Proceedings" }, { "paragraph_id": 63, "text": "During the election of a President, the previous President (or, if unable to do so, one of the previous vice-presidents) presides over the chamber. Prior to 2009, the oldest member fulfilled this role, but the rule was changed to prevent far-right French MEP Jean-Marie Le Pen from taking the chair.", "title": "Proceedings" }, { "paragraph_id": 64, "text": "Below the President, there are 14 Vice-Presidents who chair debates when the President is not in the chamber. There are a number of other bodies and posts responsible for the running of parliament besides these speakers. The two main bodies are the Bureau, which is responsible for budgetary and administrative issues, and the Conference of Presidents, which is a governing body composed of the presidents of each of the parliament's political groups. Looking after the financial and administrative interests of members are five Quaestors.", "title": "Proceedings" }, { "paragraph_id": 65, "text": "As of 2014, the European Parliament budget was EUR 1.756 billion. A 2008 report on the Parliament's finances highlighted certain overspending and mispayments. Despite some MEPs calling for the report to be published, Parliamentary authorities had refused until an MEP broke confidentiality and leaked it.", "title": "Proceedings" }, { "paragraph_id": 66, "text": "The Parliament has 20 Standing Committees consisting of 25 to 73 MEPs each (reflecting the political make-up of the whole Parliament), including a chair, a bureau and a secretariat. They meet twice a month in public to draw up, amend and adopt legislative proposals and reports to be presented to the plenary. The rapporteurs for a committee are supposed to present the view of the committee, although notably this has not always been the case. In the events leading to the resignation of the Santer Commission, the rapporteur went against the Budgetary Control Committee's narrow vote to discharge the budget, and urged the Parliament to reject it.", "title": "Proceedings" }, { "paragraph_id": 67, "text": "Committees can also set up sub-committees (e.g. the Subcommittee on Human Rights) and temporary committees to deal with a specific topic (e.g. on extraordinary rendition). The chairs of the Committees co-ordinate their work through the \"Conference of Committee Chairmen\". When co-decision was introduced it increased the Parliament's powers in a number of areas, but most notably those covered by the Committee on the Environment, Public Health and Food Safety. 
Previously this committee was regarded by MEPs as a \"Cinderella committee\"; however, as it gained a new importance, it became more professional and rigorous, attracting increasing attention to its work.", "title": "Proceedings" }, { "paragraph_id": 68, "text": "The nature of the committees differs from that of their national counterparts as, although smaller in comparison to those of the United States Congress, the European Parliament's committees are unusually large by European standards, with between eight and twelve dedicated members of staff and three to four support staff. Considerable administration, archives and research resources are also at the disposal of the whole Parliament when needed.", "title": "Proceedings" }, { "paragraph_id": 69, "text": "Delegations of the Parliament are formed in a similar manner and are responsible for relations with parliaments outside the EU. There are 34 delegations, each made up of around 15 MEPs; the chairpersons of the delegations also cooperate in a conference, as the committee chairs do. They include \"Interparliamentary delegations\" (maintaining relations with parliaments outside the EU), \"joint parliamentary committees\" (maintaining relations with parliaments of states which are candidates or associates of the EU), the delegation to the ACP-EU Joint Parliamentary Assembly and the delegation to the Euro-Mediterranean Parliamentary Assembly. MEPs also participate in other international activities, such as the Euro-Latin American Parliamentary Assembly and the Transatlantic Legislators' Dialogue, and through election observation in third countries.", "title": "Proceedings" }, { "paragraph_id": 70, "text": "The Intergroups in the European Parliament are informal fora which gather MEPs from various political groups around any topic. They do not express the view of the European Parliament. They serve a double purpose: to address topics which cut across several committees, and to do so in a less formal manner. Their daily secretariat can be run either through the offices of MEPs or through interest groups, be they corporate lobbies or NGOs. The favoured access to MEPs which the organisation running the secretariat enjoys may be one explanation for the multiplication of Intergroups in the 1990s. They are now strictly regulated and financial support, direct or otherwise (via secretariat staff, for example), must be officially specified in a declaration of financial interests. Intergroups are also established or renewed at the beginning of each legislature through a specific process. Indeed, the proposal for the constitution or renewal of an Intergroup must be supported by at least three political groups, whose support is limited to a specific number of proposals in proportion to their size (for example, for the 2014–2019 legislature, the EPP or S&D political groups could support 22 proposals whereas the Greens/EFA or the EFDD political groups could support only 7).", "title": "Proceedings" }, { "paragraph_id": 71, "text": "Speakers in the European Parliament are entitled to speak in any of the 24 official languages of the European Union, ranging from French and German to Maltese and Irish. Simultaneous interpreting is offered in all plenary sessions, and all final texts of legislation are translated. With twenty-four languages, the European Parliament is the most multilingual parliament in the world and the biggest employer of interpreters in the world (employing 350 full-time and 400 freelancers when there is higher demand). 
Citizens may also address the Parliament in Basque, Catalan/Valencian and Galician.", "title": "Proceedings" }, { "paragraph_id": 72, "text": "Usually, interpreters work from a foreign tongue into their native tongue. Due to the large number of languages, some being minor ones, since 1995 interpreting is sometimes done the opposite way, out of an interpreter's native tongue (the \"retour\" system). In addition, a speech in a minor language may be interpreted through a third language for lack of interpreters (\"relay\" interpreting) – for example, when interpreting out of Estonian into Maltese. Due to the complexity of the issues, interpretation is not word for word. Instead, interpreters have to convey the political meaning of a speech, regardless of their own views. This requires detailed understanding of the politics and terms of the Parliament, involving a great deal of preparation beforehand (e.g. reading the documents in question). Difficulty can often arise when MEPs use profanities, jokes and word play or speak too fast.", "title": "Proceedings" }, { "paragraph_id": 73, "text": "While some see speaking their native language as an important part of their identity, and can speak more fluently in debates, interpretation and its cost have been criticised by some. A 2006 report by Alexander Stubb MEP highlighted that by using only English, French and German, costs could be reduced from €118,000 per day (for the 21 languages then in use – Romanian, Bulgarian and Croatian having not yet been included) to €8,900 per day. There has also been a small-scale campaign to make French the reference language for all legal texts, on the basis of an argument that it is clearer and more precise for legal purposes.", "title": "Proceedings" }, { "paragraph_id": 74, "text": "Because the proceedings are translated into all of the official EU languages, they have been used to make a multilingual corpus known as Europarl. It is widely used to train statistical machine translation systems.", "title": "Proceedings" }, { "paragraph_id": 75, "text": "On 12 December 2022, President Metsola announced that all work with Qatar would be suspended.", "title": "Corruption scandal" }, { "paragraph_id": 76, "text": "A European Union correspondent, Jack Parrock, confirmed on the basis of sources close to Qatar that the UAE was involved in plotting the corruption scandal. Parrock said the official investigations, leaked documents and a number of European sources have confirmed the Emirati involvement in planning the alleged bribery case against Qatar. In a separate report by The European Microscope, documents revealed that the UAE made extensive efforts to woo multiple members of the European Parliament. Abu Dhabi intensified the lobbying campaign to build its support within the European Parliament and to push its members to speak in favour of the Emirates. Following claims by Qatari officials, the Italian web publisher Dagospia alleged that the UAE's plan against Qatar involved Tahnoun bin Zayed. It is alleged that the brother of UAE President Mohamed bin Zayed Al Nahyan gave Belgium the tips that led to the investigations against Qatar.", "title": "Corruption scandal" }, { "paragraph_id": 77, "text": "At the same meeting, the Greens–European Free Alliance and Renew Europe both called for an inquiry committee to be set up by the European Parliament. 
The suspension of Parliamentary business at this time was significant as it came just three days before the Parliament was due to vote on introducing a visa-free travel agreement with Qatar and other countries. This resulted in the vote on visa-free travel to Ecuador, Kuwait, and Oman also being cancelled. In addition, a major and controversial air transit agreement that would have allowed Qatar Airways unlimited access to the EU market was put on hold after warnings that Qatar may have interfered in Parliament's internal deliberations on the agreement. During the negotiations, there was criticism by EU member states that the agreement, negotiated by the Parliament's transport committee, was unduly favourable to Qatar.", "title": "Corruption scandal" }, { "paragraph_id": 78, "text": "On 16 December 2022, an article published by Politico described the connection between Antonio Panzeri and Abderrahim Atmoun. Pier Antonio Panzeri, a former Italian member of the European Parliament who headed the Maghreb delegation, and Abderrahim Atmoun, his co-president of the EU-Morocco joint parliamentary committee, attended an award ceremony in 2014 where they were decorated by King Mohammed VI of Morocco. Atmoun, now Morocco's ambassador in Warsaw, posted pictures from the ceremony with the king, as well as a series of pictures showcasing his long-term association with Panzeri – a man he has publicly described as a friend since as early as 2011. Later pictures also include Francesco Giorgi; the three can be seen sitting together in a meeting room. Later, in 2022, the three men were caught up in the corruption scandal as Belgium launched an investigation into whether Qatar and Morocco had bought influence in the European Parliament. Panzeri and Giorgi, along with Giorgi's partner Eva Kaili, were jailed facing preliminary charges of corruption. Warrants were also issued against Panzeri's wife and daughter in connection with influence buying; the warrants mention Atmoun giving them gifts. The lawyers have declined to comment, and Morocco's embassies in Warsaw and Brussels have not responded to calls. Panzeri's wife and daughter have also denied any wrongdoing.", "title": "Corruption scandal" }, { "paragraph_id": 79, "text": "President of the European Parliament Roberta Metsola stated in January 2023 that she had moved to remove the parliamentary immunity of two MEPs implicated in the ongoing corruption scandal, after receiving a request from the Belgian police.", "title": "Corruption scandal" }, { "paragraph_id": 80, "text": "The European Parliament has been criticised for its prodigality and for being too complacent about conflicts of interest. Its refusal to become a full member of GRECO, as all its member states are, has also drawn criticism.", "title": "Corruption scandal" }, { "paragraph_id": 81, "text": "According to the European Parliament website, the annual parliament budget for 2021 was €2.064 billion, which corresponds to 1.2% of the EU budget. 
The main cost categories were:", "title": "Annual costs" }, { "paragraph_id": 82, "text": "According to a European Parliament study prepared in 2013, the Strasbourg seat costs an extra €103 million over maintaining a single location and according to the Court of Auditors an additional €5 million is related to travel expenses caused by having two seats.", "title": "Annual costs" }, { "paragraph_id": 83, "text": "As a comparison, the German lower house of parliament (Bundestag) is estimated to cost €517 million in total for 2018, for a parliament with 709 members. The British House of Commons reported total annual costs in 2016-2017 of £249 million (€279 million). It had 650 seats.", "title": "Annual costs" }, { "paragraph_id": 84, "text": "According to The Economist, the European Parliament costs more than the British, French and German parliaments combined. A quarter of the costs is estimated to be related to translation and interpretation costs (c. €460 million) and the double seats are estimated to add an additional €180 million a year. For a like-for-like comparison, these two cost blocks can be excluded.", "title": "Annual costs" }, { "paragraph_id": 85, "text": "On 2 July 2018, MEPs rejected proposals to tighten the rules around the General Expenditure Allowance (GEA), which \"is a controversial €4,416 per month payment that MEPs are given to cover office and other expenses, but they are not required to provide any evidence of how the money is spent\".", "title": "Annual costs" }, { "paragraph_id": 86, "text": "The Parliament is based in three different cities with numerous buildings. A protocol attached to the Treaty of Amsterdam requires that 12 plenary sessions be held in Strasbourg (none in August but two in October), which is the Parliament's official seat, while extra part sessions as well as committee meetings are held in Brussels. Luxembourg City hosts the Secretariat of the European Parliament. The European Parliament is one of at least two assemblies in the world with more than one meeting place (another being the parliament of the Isle of Man, Tynwald) and one of the few that does not have the power to decide its own location.", "title": "Seat" }, { "paragraph_id": 87, "text": "The Strasbourg seat is seen as a symbol of reconciliation between France and Germany, the Strasbourg region having been fought over by the two countries in the past. However, the cost and inconvenience of having two seats is questioned. While Strasbourg is the official seat, and sits alongside the Council of Europe, Brussels is home to nearly all other major EU institutions, with the majority of Parliament's work being carried out there. Critics have described the two-seat arrangement as a \"travelling circus\", and there is a strong movement to establish Brussels as the sole seat. This is because the other political institutions (the commission, Council and European Council) are located there, and hence Brussels is treated as the 'capital' of the EU. This movement has received strong backing from numerous figures, including Margot Wallström, Commission First-Vice President from 2004 to 2010, who stated that \"something that was once a very positive symbol of the EU reuniting France and Germany has now become a negative symbol – of wasting money, bureaucracy and the insanity of the Brussels institutions\". 
The Green Party has also noted the environmental cost in a study led by Jean Lambert MEP and Caroline Lucas MEP; in addition to the extra €200 million spent on the extra seat, there are over 20,268 tonnes of additional carbon dioxide, undermining any environmental stance of the institution and the Union. The campaign is further backed by a million-strong online petition started by Cecilia Malmström MEP. In August 2014, an assessment by the European Court of Auditors calculated that relocating the Strasbourg seat of the European Parliament to Brussels would save €113.8 million per year. In 2006, there were allegations of irregularities in the charges made by the city of Strasbourg on buildings the Parliament rented, thus further harming the case for the Strasbourg seat.", "title": "Seat" }, { "paragraph_id": 88, "text": "Most MEPs prefer Brussels as a single base. A poll of MEPs found 89% of the respondents wanting a single seat, and 81% preferring Brussels. Another survey found 68% support. In July 2011, an absolute majority of MEPs voted in favour of a single seat. In early 2011, the Parliament voted to scrap one of the Strasbourg sessions by holding two within a single week. The mayor of Strasbourg officially reacted by stating \"we will counter-attack by upturning the adversary's strength to our own profit, as a judoka would do\". However, as Parliament's seat is now fixed by the treaties, it can only be changed by the Council acting unanimously, meaning that France could veto any move. Former French President Nicolas Sarkozy stated that the Strasbourg seat is \"non-negotiable\" and that France has no intention of surrendering the only EU institution on French soil. Given France's declared intention to veto any relocation to Brussels, some MEPs have advocated civil disobedience by refusing to take part in the monthly exodus to Strasbourg.", "title": "Seat" }, { "paragraph_id": 89, "text": "However, the main building in Brussels has been deteriorating for more than a decade. Renovation or reconstruction works, including a hemicycle, were estimated in 2017 to cost at least €500 million, with fears that the cost would be even higher and could possibly escalate to €1 billion, whereas the seat in Strasbourg already offers a fully-fledged hemicycle.", "title": "Seat" }, { "paragraph_id": 90, "text": "Over the last few years, European institutions have committed to promoting transparency, openness, and the availability of information about their work. In particular, transparency is regarded as pivotal to the action of European institutions and a general principle of EU law, to be applied to the activities of EU institutions in order to strengthen the Union's democratic foundation. The general principles of openness and transparency are reaffirmed in Article 8 A, point 3, of the Treaty of Lisbon and Article 10.3 of the Maastricht Treaty respectively, stating that \"every citizen shall have the right to participate in the democratic life of the Union. Decisions shall be taken as openly and as closely as possible to the citizen\". 
Furthermore, both treaties acknowledge the value of dialogue between citizens, representative associations, civil society, and European institutions.", "title": "Channels of dialogue, information, and communication with European civil society" }, { "paragraph_id": 91, "text": "Article 17 of the Treaty on the Functioning of the European Union (TFEU) lays the juridical foundation for an open, transparent dialogue between European institutions and churches, religious associations, and non-confessional and philosophical organisations. In July 2014, at the beginning of the 8th term, then President of the European Parliament Martin Schulz tasked Antonio Tajani, then vice-president, with implementing the dialogue with the religious and confessional organisations included in Article 17. In this framework, the European Parliament hosts high-level conferences on inter-religious dialogue, with a focus on current issues and in relation to parliamentary work.", "title": "Channels of dialogue, information, and communication with European civil society" }, { "paragraph_id": 92, "text": "The office of European Parliament Mediator for International Parental Child Abduction was established in 1987 on the initiative of British MEP Charles Henry Plumb, with the goal of helping minor children of international couples who fall victim to parental abduction. The Mediator seeks negotiated solutions in the best interest of the minor when the minor is abducted by a parent following the separation of the couple, regardless of whether the couple were married or unmarried. Since its creation, the office has been held by Mairead McGuinness (since 2014), Roberta Angelilli (2009–2014), Evelyne Gebhardt (2004–2009), Mary Banotti (1995–2004), and Marie-Claude Vayssade (1987–1994). The Mediator's main task is to assist parents in finding a solution in the minor's best interest through mediation, i.e. a form of dispute resolution that is an alternative to a lawsuit. The Mediator acts at the request of a citizen and, after evaluating the request, starts a mediation process aimed at reaching an agreement. Once signed by both parties and the Mediator, the agreement is official. The nature of the agreement is that of a private contract between the parties. In defining the agreement, the European Parliament offers the parties the legal support necessary to reach a sound, lawful agreement based on legality and equity. The agreement can be ratified by the competent national courts and can also lay the foundation for consensual separation or divorce.", "title": "Channels of dialogue, information, and communication with European civil society" }, { "paragraph_id": 93, "text": "The European Parliamentary Research Service (EPRS) is the European Parliament's in-house research department and think tank. It provides Members of the European Parliament – and, where appropriate, parliamentary committees – with independent, objective and authoritative analysis of, and research on, policy issues relating to the European Union, in order to assist them in their parliamentary work. 
It is also designed to increase Members' and EP committees' capacity to scrutinise and oversee the European Commission and other EU executive bodies.", "title": "European Parliamentary Research Service" }, { "paragraph_id": 94, "text": "EPRS aims to provide a comprehensive range of products and services, backed by specialist internal expertise and knowledge sources in all policy fields, thereby empowering Members and committees through knowledge and contributing to the Parliament's effectiveness and influence as an institution. In undertaking this work, the EPRS supports and promotes parliamentary outreach to the wider public, including dialogue with relevant stakeholders in the EU's system of multi-level governance. All EPRS publications are publicly available on the EP Think Tank platform.", "title": "European Parliamentary Research Service" }, { "paragraph_id": 95, "text": "The European Parliament periodically commissions opinion polls and studies on public opinion trends in Member States to survey perceptions and expectations of citizens about its work and the overall activities of the European Union. Topics include citizens' perception of the European Parliament's role, their knowledge of the institution, their sense of belonging in the European Union, opinions on European elections and European integration, identity, citizenship and political values, as well as current issues such as climate change, the economy and politics. Eurobarometer analyses seek to provide an overall picture of national situations, regional specificities, socio-demographic cleavages, and historical trends.", "title": "Eurobarometer of the European Parliament" }, { "paragraph_id": 96, "text": "With the Sakharov Prize, created in 1988, the European Parliament supports human rights by awarding individuals who contribute to promoting human rights worldwide, thus raising awareness of human rights violations. Priorities include: protection of human rights and fundamental liberties, with a particular focus on freedom of expression; protection of minority rights; compliance with international law; and development of democracy and authentic rule of law.", "title": "Prizes" }, { "paragraph_id": 97, "text": "The European Charlemagne Youth Prize seeks to encourage youth participation in the European integration process. It is awarded by the European Parliament and the Foundation of the International Charlemagne Prize of Aachen to youth projects aimed at nurturing a common European identity and European citizenship.", "title": "Prizes" }, { "paragraph_id": 98, "text": "The European Citizens' Prize is awarded by the European Parliament to activities and actions carried out by citizens and associations to promote integration between the citizens of EU member states and transnational cooperation projects in the EU.", "title": "Prizes" }, { "paragraph_id": 99, "text": "Since 2007, the LUX Prize has been awarded by the European Parliament to films dealing with current topics of public European interest that encourage reflection on Europe and its future. Over time, the LUX Prize has become a prestigious cinema award which supports European film and production, including outside the EU.", "title": "Prizes" }, { "paragraph_id": 100, "text": "Since 2021, the Daphne Caruana Galizia Journalism Prize has been awarded by the European Parliament to outstanding journalism that reflects EU values. The prize consists of an award of €20,000, and the very first winner was announced in October 2021. 
This award is named after the late Maltese journalist Daphne Caruana Galizia, who was assassinated in Malta on 16 October 2017. In 2021, the prize was awarded to the Pegasus Project.", "title": "Prizes" } ]
The European Parliament (EP) is one of the legislative bodies of the European Union and one of its seven institutions. Together with the Council of the European Union, it adopts European legislation, following a proposal by the European Commission. The Parliament is composed of 705 members (MEPs). It represents the second-largest democratic electorate in the world, with 375 million eligible voters in 2009. Since 1979, the Parliament has been directly elected every five years by the citizens of the European Union through universal suffrage. Voter turnout in parliamentary elections decreased at each election after 1979 until 2019, when turnout increased by eight percentage points and rose above 50% for the first time since 1994. The voting age is 18 in all EU member states except for Malta, Austria and Germany, where it is 16, and Greece, where it is 17. Belgian citizens can request to vote from the age of 16 as well. Although the European Parliament has legislative power, as does the Council, it does not formally possess the right of initiative as most national parliaments of the member states do, with the right of initiative being solely a prerogative of the European Commission. The Parliament is the "first institution" of the European Union, and shares equal legislative and budgetary powers with the Council. It likewise has equal control over the EU budget. Ultimately, the European Commission, which serves as the executive branch of the EU, is accountable to Parliament. In particular, Parliament can decide whether or not to approve the European Council's nominee for President of the Commission, and is further tasked with approving the appointment of the commission as a whole. It can subsequently force the current Commission to resign by adopting a motion of censure. The president of the European Parliament is the body's speaker and presides over the multi-party chamber. The five largest political groups are the European People's Party Group (EPP), the Progressive Alliance of Socialists and Democrats (S&D), Renew Europe, the Greens/European Free Alliance (Greens/EFA) and Identity and Democracy (ID). The last EU-wide election was held in 2019. The Parliament's headquarters are in Strasbourg, France, and its administrative offices are in Luxembourg City. Plenary sessions are "normally held in Strasbourg for four days a month, but sometimes there are additional sessions in Brussels", while the Parliament's committee meetings are held primarily in Brussels, Belgium.
2001-10-17T09:17:11Z
2023-12-30T16:17:38Z
[ "Template:Snd", "Template:Cite web", "Template:Authority control", "Template:National apportionment of MEPs", "Template:As of", "Template:Multiple image", "Template:Reflist", "Template:Parliaments in Europe", "Template:European Union topics", "Template:For-multi", "Template:Blockquote", "Template:Cite news", "Template:Refbegin", "Template:Short description", "Template:Infobox legislature", "Template:Politics of the European Union", "Template:Main", "Template:Official website", "Template:Use dmy dates", "Template:EP election results graph (percentage)", "Template:Cite book", "Template:Cite journal", "Template:Cite tweet", "Template:Portal bar", "Template:Further", "Template:Webarchive", "Template:Refend", "Template:Wikisourcecat", "Template:See also", "Template:R", "Template:Wikiquote", "Template:Commons category", "Template:European Parliament", "Template:Orders, decorations, and medals of the European Union", "Template:Clarify", "Template:Cite report" ]
https://en.wikipedia.org/wiki/European_Parliament
9,582
European Council
The European Council (informally EUCO) is a collegiate body (directorial system) that defines the overall political direction and priorities of the European Union. The European Council is part of the executive of the European Union (EU), beside the European Commission. It is composed of the heads of state or government of the EU member states, the President of the European Council, and the President of the European Commission. The High Representative of the Union for Foreign Affairs and Security Policy also takes part in its meetings. Established as an informal summit in 1975, the European Council was formalised as an institution in 2009 upon the commencement of the Treaty of Lisbon. Its current president is Charles Michel, former Prime Minister of Belgium. While the European Council has no legislative power, it is a strategic (and crisis-solving) body that provides the union with general political directions and priorities, and acts as a collective presidency. The European Commission remains the sole initiator of legislation, but the European Council provides a guide to legislative policy. The meetings of the European Council, still commonly referred to as EU summits, are chaired by its president and take place at least twice every six months; usually in the Europa building in Brussels. Decisions of the European Council are taken by consensus, except where the Treaties provide otherwise. The European Council officially gained the status of an EU institution after the Treaty of Lisbon in 2007, distinct from the Council of the European Union (Council of Ministers). Before that, the first summits of EU heads of state or government were held in February and July 1961 (in Paris and Bonn respectively). They were informal summits of the leaders of the European Community, and were started due to then-French President Charles de Gaulle's resentment at the domination of supranational institutions (notably the European Commission) over the integration process, but petered out. The first influential summit held, after the departure of de Gaulle, was the Hague summit of 1969, which reached an agreement on the admittance of the United Kingdom into the Community and initiated foreign policy cooperation (the European Political Cooperation) taking integration beyond economics. The summits were only formalised in the period between 1974 and 1988. At the December summit in Paris in 1974, following a proposal from then-French president Valéry Giscard d'Estaing, it was agreed that more high level, political input was needed following the "empty chair crisis" and economic problems. The inaugural European Council, as it became known, was held in Dublin on 10 and 11 March 1975 during Ireland's first Presidency of the Council of Ministers. In 1987, it was included in the treaties for the first time (the Single European Act) and had a defined role for the first time in the Maastricht Treaty. At first only a minimum of two meetings per year were required, which resulted in an average of three meetings per year being held for the 1975–1995 period. Since 1996, the number of meetings were required to be minimum four per year. For the latest 2008–2014 period, this minimum was well exceeded, by an average of seven meetings being held per year. The seat of the Council was formalised in 2002, basing it in Brussels. Three types of European Councils exist: Informal, Scheduled and Extraordinary. 
While the informal meetings are also scheduled 1½ years in advance, they differ from the scheduled ordinary meetings by not ending with official Council conclusions, as they instead end by more broad political Statements on some cherry picked policy matters. The extraordinary meetings always end with official Council conclusions—but differs from the scheduled meetings by not being scheduled more than a year in advance, as for example in 2001 when the European Council gathered to lead the European Union's response to the 11 September attacks. Some meetings of the European Council—and, before the European Council was formalised, meetings of the heads of government—are seen by some as turning points in the history of the European Union. For example: As such, the European Council had already existed before it gained the status as an institution of the European Union with the entering into force of the Treaty of Lisbon, but even after it had been mentioned in the treaties (since the Single European Act) it could only take political decisions, not formal legal acts. However, when necessary, the Heads of State or Government could also meet as the Council of Ministers and take formal decisions in that role. Sometimes, this was even compulsory, e.g. Article 214(2) of the Treaty establishing the European Community provided (before it was amended by the Treaty of Lisbon) that ‘the Council, meeting in the composition of Heads of State or Government and acting by a qualified majority, shall nominate the person it intends to appoint as President of the Commission’ (emphasis added); the same rule applied in some monetary policy provisions introduced by the Maastricht Treaty (e.g. Article 109j TEC). In that case, what was politically part of a European Council meeting was legally a meeting of the Council of Ministers. When the European Council, already introduced into the treaties by the Single European Act, became an institution by virtue of the Treaty of Lisbon, this was no longer necessary, and the "Council [of the European Union] meeting in the composition of the Heads of State or Government", was replaced in these instances by the European Council now taking formal legally binding decisions in these cases (Article 15 of the Treaty on European Union). The Treaty of Lisbon made the European Council a formal institution distinct from the (ordinary) Council of the EU, and created the present longer term and full-time presidency. As an outgrowth of the Council of the EU, the European Council had previously followed the same Presidency, rotating between each member state. While the Council of the EU retains that system, the European Council established, with no change in powers, a system of appointing an individual (without them being a national leader) for a two-and-a-half-year term—which can be renewed for the same person only once. Following the ratification of the treaty in December 2009, the European Council elected the then-Prime Minister of Belgium Herman Van Rompuy as its first permanent president (resigning from Belgian Prime Minister). The European Council is an official institution of the EU, described in the Lisbon Treaty as a body which "shall provide the Union with the necessary impetus for its development". Essentially it defines the EU's policy agenda and has thus been considered to be the motor of European integration. 
Beyond the need to provide "impetus", the council has developed further roles: to "settle issues outstanding from discussions at a lower level", to lead in foreign policy — acting externally as a "collective Head of State", "formal ratification of important documents" and "involvement in the negotiation of the treaty changes". Since the institution is composed of national leaders, it gathers the executive power of the member states and thus has great influence in high-profile policy areas such as foreign policy. It also exercises powers of appointment, such as the appointment of its own President, the High Representative of the Union for Foreign Affairs and Security Policy, and the President of the European Central Bank. It proposes, to the European Parliament, a candidate for President of the European Commission. Moreover, the European Council influences police and justice planning, the composition of the commission, matters relating to the organisation of the rotating Council presidency, the suspension of membership rights, and changing the voting systems through the Passerelle Clause. Although the European Council has no direct legislative power, under the "emergency brake" procedure, a state outvoted in the Council of Ministers may refer contentious legislation to the European Council. However, the state may still be outvoted in the European Council. Hence, with powers over the supranational executive of the EU, in addition to its other powers, the European Council has been described by some as the Union's "supreme political authority". The European Council consists of the heads of state or government of the member states, alongside its own President and the Commission President (both non-voting). The meetings used to be regularly attended by the national foreign minister as well, and the Commission President was likewise accompanied by another member of the commission. However, since the Treaty of Lisbon, this has been discontinued, as the size of the body had become somewhat large following successive accessions of new Member States to the Union. Meetings can also include other invitees, such as the President of the European Central Bank, as required. The Secretary-General of the Council attends, and is responsible for organisational matters, including minutes. The President of the European Parliament also attends to give an opening speech outlining the European Parliament's position before talks begin. Additionally, the negotiations involve a large number of other people working behind the scenes. Most of those people, however, are not allowed into the conference room, except for two delegates per state to relay messages. At the push of a button, members can also call for advice from a Permanent Representative via the "Antici Group" in an adjacent room. The group is composed of diplomats and assistants who convey information and requests. Interpreters are also required for meetings, as members are permitted to speak in their own languages. As the composition is not precisely defined, some states which have a considerable division of executive power can find it difficult to decide who should attend the meetings. While an MEP, Alexander Stubb argued that there was no need for the President of Finland to attend Council meetings alongside or instead of the Prime Minister of Finland (who was head of European foreign policy). 
In 2008, having become Finnish Foreign Minister, Stubb was forced out of the Finnish delegation to the emergency council meeting on the Georgian crisis because the President wanted to attend the high-profile summit as well as the Prime Minister (only two people from each country could attend the meetings). This was despite Stubb being, at the time, Chair-in-Office of the Organisation for Security and Co-operation in Europe, which was heavily involved in the crisis. Problems also occurred in Poland, where the President of Poland and the Prime Minister of Poland were of different parties and had different foreign policy responses to the crisis. A similar situation arose in Romania between President Traian Băsescu and Prime Minister Călin Popescu-Tăriceanu in 2007–2008 and again in 2012 with Prime Minister Victor Ponta, both of whom opposed the president. A number of ad hoc meetings of heads of state or government of the member states of the euro area were held in 2010 and 2011 to discuss the sovereign debt crisis. It was agreed in October 2011 that they should meet regularly twice a year (with extra meetings if needed). These meetings normally take place at the end of a European Council meeting and follow the same format (chaired by the President of the European Council and including the President of the Commission), but are usually restricted to the (currently 20) heads of state or government of the member states of the eurozone. The President of the European Council is elected by the European Council by a qualified majority for a once-renewable term of two and a half years. The President must report to the European Parliament after each European Council meeting. The post was created by the Treaty of Lisbon and was subject to a debate over its exact role. Prior to Lisbon, the Presidency rotated in accordance with the Presidency of the Council of the European Union. The role of that President-in-Office was in no sense (other than protocol) equivalent to an office of a head of state, merely a primus inter pares (first among equals) role among other European heads of government. The President-in-Office was primarily responsible for preparing and chairing the Council meetings, and had no executive powers other than the task of representing the Union externally. Now the leader of the Council Presidency country can still act as president when the permanent president is absent. Almost all members of the European Council are members of a political party at national level, and most of these are also members of a political party at European level or other alliances such as Renew Europe. These alliances frequently hold pre-meetings of their European Council members prior to European Council meetings. However, the European Council is composed to represent the EU's states rather than political alliances, and decisions are generally made on these lines, though ideological alignment can colour their political agreements and their choice of appointments (such as their president). The charts below outline the number of leaders affiliated to each alliance and their total voting weight. The map indicates the alignment of each individual country. The European Council is required by Article 15.3 TEU to meet at least twice every six months, but convenes more frequently in practice. Despite efforts to contain business, meetings typically last for at least two days, and run long into the night. Until 2002, the venue for European Council summits was the member state that held the rotating Presidency of the Council of the European Union. 
However, European leaders agreed during ratification of the Nice Treaty to forgo this arrangement at such a time as the total membership of the European Union surpassed 18 member states. An early implementation of this agreement occurred in 2002, with certain states agreeing to waive their right to host meetings, favouring Brussels as the location. Following the growth of the EU to 25 member states, with the 2004 enlargement, all subsequent official summits of the European Council have been held in Brussels, with the exception of occasional ad hoc meetings, such as the 2017 informal European Council in Malta. The logistical, environmental, financial and security arrangements of hosting large summits are usually cited as the primary factors in the decision by EU leaders to move towards a permanent seat for the European Council. Additionally, some scholars argue that the move, when coupled with the formalisation of the European Council in the Lisbon Treaty, represents an institutionalisation of an ad hoc EU organ that had its origins in the Luxembourg compromise, with national leaders reasserting their dominance as the EU's "supreme political authority". Originally, both the European Council and the Council of the European Union utilised the Justus Lipsius building as their Brussels venue. In order to make room for additional meeting space, a number of renovations were made, including the conversion of an underground carpark into additional press briefing rooms. However, in 2004 leaders decided that the logistical problems created by the outdated facilities warranted the construction of a new purpose-built seat able to cope with the nearly 6,000 meetings, working groups, and summits per year. This resulted in the Europa building, which opened its doors in 2017. The focal point of the new building, the distinctive multi-storey "lantern-shaped" structure in which the main meeting room is located, is utilised in both the European Council's and Council of the European Union's official logos. The EU command and control (C2) structure is directed by political bodies composed of member states' representatives, and generally requires unanimous decisions. As of April 2019:
[ { "paragraph_id": 0, "text": "The European Council (informally EUCO) is a collegiate body (directorial system) that defines the overall political direction and priorities of the European Union. The European Council is part of the executive of the European Union (EU), beside the European Commission. It is composed of the heads of state or government of the EU member states, the President of the European Council, and the President of the European Commission. The High Representative of the Union for Foreign Affairs and Security Policy also takes part in its meetings.", "title": "" }, { "paragraph_id": 1, "text": "Established as an informal summit in 1975, the European Council was formalised as an institution in 2009 upon the commencement of the Treaty of Lisbon. Its current president is Charles Michel, former Prime Minister of Belgium.", "title": "" }, { "paragraph_id": 2, "text": "While the European Council has no legislative power, it is a strategic (and crisis-solving) body that provides the union with general political directions and priorities, and acts as a collective presidency. The European Commission remains the sole initiator of legislation, but the European Council provides a guide to legislative policy.", "title": "Scope" }, { "paragraph_id": 3, "text": "The meetings of the European Council, still commonly referred to as EU summits, are chaired by its president and take place at least twice every six months; usually in the Europa building in Brussels. Decisions of the European Council are taken by consensus, except where the Treaties provide otherwise.", "title": "Scope" }, { "paragraph_id": 4, "text": "The European Council officially gained the status of an EU institution after the Treaty of Lisbon in 2007, distinct from the Council of the European Union (Council of Ministers). Before that, the first summits of EU heads of state or government were held in February and July 1961 (in Paris and Bonn respectively). They were informal summits of the leaders of the European Community, and were started due to then-French President Charles de Gaulle's resentment at the domination of supranational institutions (notably the European Commission) over the integration process, but petered out. The first influential summit held, after the departure of de Gaulle, was the Hague summit of 1969, which reached an agreement on the admittance of the United Kingdom into the Community and initiated foreign policy cooperation (the European Political Cooperation) taking integration beyond economics.", "title": "History" }, { "paragraph_id": 5, "text": "The summits were only formalised in the period between 1974 and 1988. At the December summit in Paris in 1974, following a proposal from then-French president Valéry Giscard d'Estaing, it was agreed that more high level, political input was needed following the \"empty chair crisis\" and economic problems. The inaugural European Council, as it became known, was held in Dublin on 10 and 11 March 1975 during Ireland's first Presidency of the Council of Ministers. In 1987, it was included in the treaties for the first time (the Single European Act) and had a defined role for the first time in the Maastricht Treaty. At first only a minimum of two meetings per year were required, which resulted in an average of three meetings per year being held for the 1975–1995 period. Since 1996, the number of meetings were required to be minimum four per year. 
For the latest 2008–2014 period, this minimum was well exceeded, by an average of seven meetings being held per year. The seat of the Council was formalised in 2002, basing it in Brussels. Three types of European Councils exist: Informal, Scheduled and Extraordinary. While the informal meetings are also scheduled 1½ years in advance, they differ from the scheduled ordinary meetings by not ending with official Council conclusions, as they instead end by more broad political Statements on some cherry picked policy matters. The extraordinary meetings always end with official Council conclusions—but differs from the scheduled meetings by not being scheduled more than a year in advance, as for example in 2001 when the European Council gathered to lead the European Union's response to the 11 September attacks.", "title": "History" }, { "paragraph_id": 6, "text": "Some meetings of the European Council—and, before the European Council was formalised, meetings of the heads of government—are seen by some as turning points in the history of the European Union. For example:", "title": "History" }, { "paragraph_id": 7, "text": "As such, the European Council had already existed before it gained the status as an institution of the European Union with the entering into force of the Treaty of Lisbon, but even after it had been mentioned in the treaties (since the Single European Act) it could only take political decisions, not formal legal acts. However, when necessary, the Heads of State or Government could also meet as the Council of Ministers and take formal decisions in that role. Sometimes, this was even compulsory, e.g. Article 214(2) of the Treaty establishing the European Community provided (before it was amended by the Treaty of Lisbon) that ‘the Council, meeting in the composition of Heads of State or Government and acting by a qualified majority, shall nominate the person it intends to appoint as President of the Commission’ (emphasis added); the same rule applied in some monetary policy provisions introduced by the Maastricht Treaty (e.g. Article 109j TEC). In that case, what was politically part of a European Council meeting was legally a meeting of the Council of Ministers. When the European Council, already introduced into the treaties by the Single European Act, became an institution by virtue of the Treaty of Lisbon, this was no longer necessary, and the \"Council [of the European Union] meeting in the composition of the Heads of State or Government\", was replaced in these instances by the European Council now taking formal legally binding decisions in these cases (Article 15 of the Treaty on European Union).", "title": "History" }, { "paragraph_id": 8, "text": "The Treaty of Lisbon made the European Council a formal institution distinct from the (ordinary) Council of the EU, and created the present longer term and full-time presidency. As an outgrowth of the Council of the EU, the European Council had previously followed the same Presidency, rotating between each member state. While the Council of the EU retains that system, the European Council established, with no change in powers, a system of appointing an individual (without them being a national leader) for a two-and-a-half-year term—which can be renewed for the same person only once. 
Following the ratification of the treaty in December 2009, the European Council elected the then-Prime Minister of Belgium Herman Van Rompuy as its first permanent president (resigning from Belgian Prime Minister).", "title": "History" }, { "paragraph_id": 9, "text": "The European Council is an official institution of the EU, described in the Lisbon Treaty as a body which \"shall provide the Union with the necessary impetus for its development\". Essentially it defines the EU's policy agenda and has thus been considered to be the motor of European integration. Beyond the need to provide \"impetus\", the council has developed further roles: to \"settle issues outstanding from discussions at a lower level\", to lead in foreign policy — acting externally as a \"collective Head of State\", \"formal ratification of important documents\" and \"involvement in the negotiation of the treaty changes\".", "title": "Powers and functions" }, { "paragraph_id": 10, "text": "Since the institution is composed of national leaders, it gathers the executive power of the member states and has thus a great influence in high-profile policy areas as for example foreign policy. It also exercises powers of appointment, such as appointment of its own President, the High Representative of the Union for Foreign Affairs and Security Policy, and the President of the European Central Bank. It proposes, to the European Parliament, a candidate for President of the European Commission. Moreover, the European Council influences police and justice planning, the composition of the commission, matters relating to the organisation of the rotating Council presidency, the suspension of membership rights, and changing the voting systems through the Passerelle Clause. Although the European Council has no direct legislative power, under the \"emergency brake\" procedure, a state outvoted in the Council of Ministers may refer contentious legislation to the European Council. However, the state may still be outvoted in the European Council. Hence with powers over the supranational executive of the EU, in addition to its other powers, the European Council has been described by some as the Union's \"supreme political authority\".", "title": "Powers and functions" }, { "paragraph_id": 11, "text": "The European Council consists of the heads of state or government of the member states, alongside its own President and the Commission President (both non-voting). The meetings used to be regularly attended by the national foreign minister as well, and the Commission President likewise accompanied by another member of the commission. However, since the Treaty of Lisbon, this has been discontinued, as the size of the body had become somewhat large following successive accessions of new Member States to the Union. Meetings can also include other invitees, such as the President of the European Central Bank, as required. The Secretary-General of the Council attends, and is responsible for organisational matters, including minutes. The President of the European Parliament also attends to give an opening speech outlining the European Parliament's position before talks begin.", "title": "Composition" }, { "paragraph_id": 12, "text": "Additionally, the negotiations involve a large number of other people working behind the scenes. Most of those people, however, are not allowed to the conference room, except for two delegates per state to relay messages. 
At the push of a button members can also call for advice from a Permanent Representative via the \"Antici Group\" in an adjacent room. The group is composed of diplomats and assistants who convey information and requests. Interpreters are also required for meetings as members are permitted to speak in their own languages.", "title": "Composition" }, { "paragraph_id": 13, "text": "As the composition is not precisely defined, some states which have a considerable division of executive power can find it difficult to decide who should attend the meetings. While an MEP, Alexander Stubb argued that there was no need for the President of Finland to attend Council meetings with or instead of the Prime Minister of Finland (who was head of European foreign policy). In 2008, having become Finnish Foreign Minister, Stubb was forced out of the Finnish delegation to the emergency council meeting on the Georgian crisis because the President wanted to attend the high-profile summit as well as the Prime Minister (only two people from each country could attend the meetings). This was despite Stubb being Chair-in-Office of the Organisation for Security and Co-operation in Europe at the time which was heavily involved in the crisis. Problems also occurred in Poland where the President of Poland and the Prime Minister of Poland were of different parties and had a different foreign policy response to the crisis. A similar situation arose in Romania between President Traian Băsescu and Prime Minister Călin Popescu-Tăriceanu in 2007–2008 and again in 2012 with Prime Minister Victor Ponta, who both opposed the president.", "title": "Composition" }, { "paragraph_id": 14, "text": "A number of ad hoc meetings of heads of state or government of the member states of the euro area were held in 2010 and 2011 to discuss the Sovereign Debt crisis. It was agreed in October 2011 that they should meet regularly twice a year (with extra meetings if needed). This will normally be at the end of a European Council meeting and according to the same format (chaired by the President of the European Council and including the President of the Commission), but usually restricted to the (currently 20) heads of state or government of the member states of the eurozone.", "title": "Composition" }, { "paragraph_id": 15, "text": "The President of the European Council is elected by the European Council by a qualified majority for a once-renewable term of two and a half years. The President must report to the European Parliament after each European Council meeting. The post was created by the Treaty of Lisbon and was subject to a debate over its exact role. Prior to Lisbon, the Presidency rotated in accordance with the Presidency of the Council of the European Union. The role of that President-in-Office was in no sense (other than protocol) equivalent to an office of a head of state, merely a primus inter pares (first among equals) role among other European heads of government. The President-in-Office was primarily responsible for preparing and chairing the Council meetings, and had no executive powers other than the task of representing the Union externally. Now the leader of the Council Presidency country can still act as president when the permanent president is absent.", "title": "Composition" }, { "paragraph_id": 16, "text": "Almost all members of the European Council are members of a political party at national level, and most of these are also members of a political party at European level or other alliances such as Renew Europe. 
These frequently hold pre-meetings of their European Council members, prior to its meetings. However, the European Council is composed to represent the EU's states rather than political alliances and decisions are generally made on these lines, though ideological alignment can colour their political agreements and their choice of appointments (such as their president).", "title": "Composition" }, { "paragraph_id": 17, "text": "The charts below outline the number of leaders affiliated to each alliance and their total voting weight. The map indicates the alignment of each individual country.", "title": "Composition" }, { "paragraph_id": 18, "text": "The European Council is required by Article 15.3 TEU to meet at least twice every six months, but convenes more frequently in practice. Despite efforts to contain business, meetings typically last for at least two days, and run long into the night.", "title": "Seat and meetings" }, { "paragraph_id": 19, "text": "Until 2002, the venue for European Council summits was the member state that held the rotating Presidency of the Council of the European Union. However, European leaders agreed during ratification of the Nice Treaty to forego this arrangement at such a time as the total membership of the European Union surpassed 18 member states. An advanced implementation of this agreement occurred in 2002, with certain states agreeing to waive their right to host meetings, favouring Brussels as the location. Following the growth of the EU to 25 member states, with the 2004 enlargement, all subsequent official summits of the European Council have been in Brussels, with the exception of punctuated ad hoc meetings, such as the 2017 informal European Council in Malta. The logistical, environmental, financial and security arrangements of hosting large summits are usually cited as the primary factors in the decision by EU leaders to move towards a permanent seat for the European Council. Additionally, some scholars argue that the move, when coupled with the formalisation of the European Council in the Lisbon Treaty, represents an institutionalisation of an ad hoc EU organ that had its origins in Luxembourg compromise, with national leaders reasserting their dominance as the EU's \"supreme political authority\".", "title": "Seat and meetings" }, { "paragraph_id": 20, "text": "Originally, both the European Council and the Council of the European Union utilised the Justus Lipsius building as their Brussels venue. In order to make room for additional meeting space a number of renovations were made, including the conversion of an underground carpark into additional press briefing rooms. However, in 2004 leaders decided the logistical problems created by the outdated facilities warranted the construction of a new purpose built seat able to cope with the nearly 6,000 meetings, working groups, and summits per year. This resulted in the Europa building, which opened its doors in 2017. The focal point of the new building, the distinctive multi-storey \"lantern-shaped\" structure in which the main meeting room is located, is utilised in both the European Council's and Council of the European Union's official logos.", "title": "Seat and meetings" }, { "paragraph_id": 21, "text": "The EU command and control (C2) structure is directed by political bodies composed of member states' representatives, and generally requires unanimous decisions. 
As of April 2019:", "title": "Role in security and defence" }, { "paragraph_id": 22, "text": "", "title": "Role in security and defence" } ]
The European Council is a collegiate body that defines the overall political direction and priorities of the European Union. The European Council is part of the executive of the European Union (EU), beside the European Commission. It is composed of the heads of state or government of the EU member states, the President of the European Council, and the President of the European Commission. The High Representative of the Union for Foreign Affairs and Security Policy also takes part in its meetings. Established as an informal summit in 1975, the European Council was formalised as an institution in 2009 upon the commencement of the Treaty of Lisbon. Its current president is Charles Michel, former Prime Minister of Belgium.
2001-10-25T23:23:34Z
2023-12-14T22:40:27Z
[ "Template:European Union topics", "Template:Use dmy dates", "Template:Cite web", "Template:Cite news", "Template:European Council", "Template:Portal bar", "Template:Use British English", "Template:Infobox organization", "Template:Bar box", "Template:Reflist", "Template:Oweb", "Template:EU politics", "Template:Authority control", "Template:Short description", "Template:Politics of the European Union", "Template:Cn", "Template:Main", "Template:European Council Members Timeline", "Template:European Union command and control structure", "Template:Cite book", "Template:Cite journal", "Template:Distinguish", "Template:Further", "Template:Citation needed", "Template:Members of the European Council" ]
https://en.wikipedia.org/wiki/European_Council
9,587
Euthanasia
Euthanasia (from Greek: εὐθανασία, lit. 'good death': εὖ, eu, 'well, good' + θάνατος, thanatos, 'death') is the practice of intentionally ending life to eliminate pain and suffering. Different countries have different euthanasia laws. The British House of Lords Select Committee on Medical Ethics defines euthanasia as "a deliberate intervention undertaken with the express intention of ending a life to relieve intractable suffering". In the Netherlands and Belgium, euthanasia is understood as "termination of life by a doctor at the request of a patient". The Dutch law, however, does not use the term 'euthanasia' but includes the concept under the broader definition of "assisted suicide and termination of life on request". Euthanasia is categorised in different ways, which include voluntary, non-voluntary, and involuntary. Voluntary euthanasia, in which a person wishes to have their life ended, is legal in a growing number of countries. Non-voluntary euthanasia, in which a patient's consent is unavailable, is legal in some countries under certain limited conditions, in both active and passive forms. Involuntary euthanasia, which is done without asking for consent or against the patient's will, is illegal in all countries and is usually considered murder. As of 2006, euthanasia had become the most active area of research in bioethics. In some countries, divisive public controversy occurs over the moral, ethical, and legal issues associated with euthanasia. Passive euthanasia (known as "pulling the plug") is legal under some circumstances in many countries. Active euthanasia, however, is legal or de facto legal in only a handful of countries (for example, Belgium, Canada, and Switzerland), which limit it to specific circumstances and require the approval of counsellors, doctors, or other specialists. In some countries—such as Nigeria, Saudi Arabia, and Pakistan—support for active euthanasia is almost nonexistent. Like other terms borrowed from history, "euthanasia" has had different meanings depending on usage. The first apparent usage of the term "euthanasia" belongs to the historian Suetonius, who described how the Emperor Augustus, "dying quickly and without suffering in the arms of his wife, Livia, experienced the 'euthanasia' he had wished for." The word "euthanasia" was first used in a medical context by Francis Bacon in the 17th century to refer to an easy, painless, happy death, during which it was a "physician's responsibility to alleviate the 'physical sufferings' of the body." Bacon referred to an "outward euthanasia"—the term "outward" he used to distinguish from a spiritual concept—the euthanasia "which regards the preparation of the soul." In current usage, euthanasia has been defined as the "painless inducement of a quick death". However, it is argued that this approach fails to properly define euthanasia, as it leaves open a number of possible actions that would meet the requirements of the definition but would not be seen as euthanasia. In particular, these include situations where a person kills another, painlessly, but for no reason beyond that of personal gain, or accidental deaths that are quick and painless but not intentional. Another approach incorporates the notion of suffering into the definition. 
The definition offered by the Oxford English Dictionary incorporates suffering as a necessary condition, with "the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma". This approach is included in Marvin Khol and Paul Kurtz's definition of it as "a mode or act of inducing or permitting death painlessly as a relief from suffering". Counterexamples can be given: such definitions may encompass killing a person suffering from an incurable disease for personal gain (such as to claim an inheritance), and commentators such as Tom Beauchamp and Arnold Davidson have argued that doing so would constitute "murder simpliciter" rather than euthanasia. The third element incorporated into many definitions is that of intentionality: the death must be intended rather than accidental, and the intent of the action must be a "merciful death". Michael Wreen argued that "the principal thing that distinguishes euthanasia from intentional killing simpliciter is the agent's motive: it must be a good motive insofar as the good of the person killed is concerned." Similarly, Heather Draper speaks to the importance of motive, arguing that "the motive forms a crucial part of arguments for euthanasia, because it must be in the best interests of the person on the receiving end." Definitions such as those offered by the House of Lords Select Committee on Medical Ethics take this path, where euthanasia is defined as "a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering." Beauchamp and Davidson also highlight Baruch Brody's definition: "an act of euthanasia is one in which one person ... (A) kills another person (B) for the benefit of the second person, who actually does benefit from being killed". Draper argued that any definition of euthanasia must incorporate four elements: an agent and a subject; an intention; causal proximity, such that the actions of the agent lead to the outcome; and an outcome. Based on this, she offered a definition incorporating those elements, stating that euthanasia "must be defined as death that results from the intention of one person to kill another person, using the most gentle and painless means possible, that is motivated solely by the best interests of the person who dies." Prior to Draper, Beauchamp and Davidson had also offered a definition that included these elements. Their definition specifically discounts fetuses to distinguish between abortions and euthanasia: In summary, we have argued ... 
that the death of a human being, A, is an instance of euthanasia if and only if (1) A's death is intended by at least one other human being, B, where B is either the cause of death or a causally relevant feature of the event resulting in death (whether by action or by omission); (2) there is either sufficient current evidence for B to believe that A is acutely suffering or irreversibly comatose, or there is sufficient current evidence related to A's present condition such that one or more known causal laws supports B's belief that A will be in a condition of acute suffering or irreversible comatoseness; (3) (a) B's primary reason for intending A's death is cessation of A's (actual or predicted future) suffering or irreversible comatoseness, where B does not intend A's death for a different primary reason, though there may be other relevant reasons, and (b) there is sufficient current evidence for either A or B that causal means to A's death will not produce any more suffering than would be produced for A if B were not to intervene; (4) the causal means to the event of A's death are chosen by A or B to be as painless as possible, unless either A or B has an overriding reason for a more painful causal means, where the reason for choosing the latter causal means does not conflict with the evidence in 3b; (5) A is a nonfetal organism. Wreen, in part responding to Beauchamp and Davidson, offered a six-part definition: Person A committed an act of euthanasia if and only if (1) A killed B or let her die; (2) A intended to kill B; (3) the intention specified in (2) was at least partial cause of the action specified in (1); (4) the causal journey from the intention specified in (2) to the action specified in (1) is more or less in accordance with A's plan of action; (5) A's killing of B is a voluntary action; (6) the motive for the action specified in (1), the motive standing behind the intention specified in (2), is the good of the person killed. Wreen also considered a seventh requirement: "(7) The good specified in (6) is, or at least includes, the avoidance of evil", although, as Wreen noted in the paper, he was not convinced that the restriction was required. In discussing his definition, Wreen noted the difficulty of justifying euthanasia when faced with the notion of the subject's "right to life". In response, Wreen argued that euthanasia has to be voluntary and that "involuntary euthanasia is, as such, a great wrong". Other commentators incorporate consent more directly into their definitions. For example, in a discussion of euthanasia presented in 2003 by the European Association of Palliative Care (EPAC) Ethics Task Force, the authors offered: "Medicalized killing of a person without the person's consent, whether nonvoluntary (where the person is unable to consent) or involuntary (against the person's will), is not euthanasia: it is murder. Hence, euthanasia can be voluntary only." Although the EPAC Ethics Task Force argued that both non-voluntary and involuntary euthanasia could not be included in the definition of euthanasia, there is discussion in the literature about excluding one but not the other. Euthanasia may be classified into three types, according to whether a person gives informed consent: voluntary, non-voluntary and involuntary. There is a debate within the medical and bioethics literature about whether or not the non-voluntary (and by extension, involuntary) killing of patients can be regarded as euthanasia, irrespective of intent or the patient's circumstances. 
In the definitions offered by Beauchamp and Davidson and, later, by Wreen, consent on the part of the patient was not considered one of their criteria, although it may have been required to justify euthanasia. However, others see consent as essential. Voluntary euthanasia is conducted with the consent of the patient. Active voluntary euthanasia is legal in Belgium, Luxembourg and the Netherlands. Passive voluntary euthanasia is legal throughout the US per Cruzan v. Director, Missouri Department of Health. When the patient brings about their own death with the assistance of a physician, the term assisted suicide is often used instead. Assisted suicide is legal in Switzerland and the U.S. states of California, Oregon, Washington, Montana and Vermont. Non-voluntary euthanasia is conducted when the consent of the patient is unavailable. Examples include child euthanasia, which is illegal worldwide but decriminalised under certain specific circumstances in the Netherlands under the Groningen Protocol. Passive forms of non-voluntary euthanasia (i.e. withholding treatment) are legal in a number of countries under specified conditions. Involuntary euthanasia is conducted against the will of the patient. Voluntary, non-voluntary and involuntary types can be further divided into passive or active variants. Passive euthanasia entails the withholding of treatment necessary for the continuance of life. Active euthanasia entails the use of lethal substances or forces (such as administering a lethal injection), and is more controversial. While some authors consider these terms to be misleading and unhelpful, they are nonetheless commonly used. In some cases, such as the administration of increasingly necessary but toxic doses of painkillers, there is a debate over whether or not to regard the practice as active or passive. Euthanasia was practiced in Ancient Greece and Rome: for example, hemlock was employed as a means of hastening death on the island of Kea, a technique also employed in Massalia. Euthanasia, in the sense of the deliberate hastening of a person's death, was supported by Socrates, Plato and Seneca the Elder in the ancient world, although Hippocrates appears to have spoken against the practice, writing "I will not prescribe a deadly drug to please someone, nor give advice that may cause his death" (noting there is some debate in the literature about whether or not this was intended to encompass euthanasia). The term euthanasia, in the earlier sense of supporting someone as they died, was used for the first time by Francis Bacon. In his work, Euthanasia medica, he chose this ancient Greek word and, in doing so, distinguished between euthanasia interior, the preparation of the soul for death, and euthanasia exterior, which was intended to make the end of life easier and painless, in exceptional circumstances by shortening life. That the ancient meaning of an easy death came to the fore again in the early modern period can be seen from its definition in the 18th-century Zedlers Universallexikon: Euthanasia: a very gentle and quiet death, which happens without painful convulsions. The word comes from ευ, bene, well, and θανατος, mors, death. The concept of euthanasia in the sense of alleviating the process of death goes back to the medical historian Karl Friedrich Heinrich Marx, who drew on Bacon's philosophical ideas. According to Marx, a doctor had a moral duty to ease the suffering of death through encouragement, support and mitigation using medication. 
Such an "alleviation of death" reflected the contemporary zeitgeist, but was brought into the medical canon of responsibility for the first time by Marx. Marx also stressed the distinction of the theological care of the soul of sick people from the physical care and medical treatment by doctors. Euthanasia in its modern sense has always been strongly opposed in the Judeo-Christian tradition. Thomas Aquinas opposed both and argued that the practice of euthanasia contradicted our natural human instincts of survival, as did Francois Ranchin (1565–1641), a French physician and professor of medicine, and Michael Boudewijns (1601–1681), a physician and teacher. Other voices argued for euthanasia, such as John Donne in 1624, and euthanasia continued to be practised. In 1678, the publication of Caspar Questel's De pulvinari morientibus non-subtrahend, ("On the pillow of which the dying should not be deprived"), initiated debate on the topic. Questel described various customs which were employed at the time to hasten the death of the dying, (including the sudden removal of a pillow, which was believed to accelerate death), and argued against their use, as doing so was "against the laws of God and Nature". This view was shared by others who followed, including Philipp Jakob Spener, Veit Riedlin and Johann Georg Krünitz. Despite opposition, euthanasia continued to be practised, involving techniques such as bleeding, suffocation, and removing people from their beds to be placed on the cold ground. Suicide and euthanasia became more accepted during the Age of Enlightenment. Thomas More wrote of euthanasia in Utopia, although it is not clear if More was intending to endorse the practice. Other cultures have taken different approaches: for example, in Japan suicide has not traditionally been viewed as a sin, as it is used in cases of honor, and accordingly, the perceptions of euthanasia are different from those in other parts of the world. In the mid-1800s, the use of morphine to treat "the pains of death" emerged, with John Warren recommending its use in 1848. A similar use of chloroform was revealed by Joseph Bullar in 1866. However, in neither case was it recommended that the use should be to hasten death. In 1870 Samuel Williams, a schoolteacher, initiated the contemporary euthanasia debate through a speech given at the Birmingham Speculative Club in England, which was subsequently published in a one-off publication entitled Essays of the Birmingham Speculative Club, the collected works of a number of members of an amateur philosophical society. Williams' proposal was to use chloroform to deliberately hasten the death of terminally ill patients: That in all cases of hopeless and painful illness, it should be the recognized duty of the medical attendant, whenever so desired by the patient, to administer chloroform or such other anaesthetic as may by-and-bye supersede chloroform – so as to destroy consciousness at once, and put the sufferer to a quick and painless death; all needful precautions being adopted to prevent any possible abuse of such duty; and means being taken to establish, beyond the possibility of doubt or question, that the remedy was applied at the express wish of the patient. The essay was favourably reviewed in The Saturday Review, but an editorial against the essay appeared in The Spectator. 
From there it proved to be influential, and other writers came out in support of such views: Lionel Tollemache wrote in favour of euthanasia, as did Annie Besant, the essayist and reformer who later became involved with the National Secular Society, considering it a duty to society to "die voluntarily and painlessly" when one reaches the point of becoming a 'burden'. Popular Science analyzed the issue in May 1873, assessing both sides of the argument. Kemp notes that at the time, medical doctors did not participate in the discussion; it was "essentially a philosophical enterprise ... tied inextricably to a number of objections to the Christian doctrine of the sanctity of human life". The rise of the euthanasia movement in the United States coincided with the so-called Gilded Age, a time of social and technological change that encompassed an "individualistic conservatism that praised laissez-faire economics, scientific method, and rationalism", along with major depressions, industrialisation and conflict between corporations and labour unions. It was also the period in which the modern hospital system was developed, which has been seen as a factor in the emergence of the euthanasia debate. Robert Ingersoll argued for euthanasia, stating in 1894 that where someone is suffering from a terminal illness, such as terminal cancer, they should have a right to end their pain through suicide. Felix Adler offered a similar approach, although, unlike Ingersoll, Adler did not reject religion. In fact, he argued from an Ethical Culture framework. In 1891, Adler argued that those suffering from overwhelming pain should have the right to commit suicide, and, furthermore, that it should be permissible for a doctor to assist – thus making Adler the first "prominent American" to argue for suicide in cases where people were suffering from chronic illness. Both Ingersoll and Adler argued for voluntary euthanasia of adults suffering from terminal ailments. Dowbiggin argues that by breaking down prior moral objections to euthanasia and suicide, Ingersoll and Adler enabled others to stretch the definition of euthanasia. The first attempt to legalise euthanasia took place in the United States, when Henry Hunt introduced legislation into the General Assembly of Ohio in 1906. Hunt did so at the behest of Anna Sophina Hall, a wealthy heiress who was a major figure in the euthanasia movement during the early 20th century in the United States. Hall had watched her mother die after an extended battle with liver cancer, and had dedicated herself to ensuring that others would not have to endure the same suffering. Towards this end she engaged in an extensive letter writing campaign, recruited Lurana Sheldon and Maud Ballington Booth, and organised a debate on euthanasia at the annual meeting of the American Humane Association in 1905 – described by Jacob Appel as the first significant public debate on the topic in the 20th century. Hunt's bill called for the administration of an anesthetic to bring about a patient's death, so long as the person is of lawful age and sound mind, and was suffering from a fatal injury, an irrevocable illness, or great physical pain. It also required that the case be heard by a physician, required informed consent in front of three witnesses, and required the attendance of three physicians who had to agree that the patient's recovery was impossible. A motion to reject the bill outright was voted down, but the bill failed to pass, 79 to 23. 
Along with the Ohio euthanasia proposal, in 1906 Assemblyman Ross Gregory introduced to the Iowa legislature a proposal to permit euthanasia. However, the Iowa legislation was broader in scope than that offered in Ohio. It allowed for the death of any person of at least ten years of age who suffered from an ailment that would prove fatal and cause extreme pain, should they be of sound mind and express a desire to artificially hasten their death. In addition, it allowed for infants to be euthanised if they were sufficiently deformed, and permitted guardians to request euthanasia on behalf of their wards. The proposed legislation also imposed penalties on physicians who refused to perform euthanasia when requested: a 6–12-month prison term and a fine of between $200 and $1,000. The proposal proved to be controversial. It engendered considerable debate and failed to pass, having been withdrawn from consideration after being passed to the Committee on Public Health. After 1906, the euthanasia debate reduced in intensity, resurfacing periodically, but not returning to the same level of debate until the 1930s in the United Kingdom. Euthanasia opponent Ian Dowbiggin argues that the early membership of the Euthanasia Society of America (ESA) reflected how many perceived euthanasia at the time, often seeing it as a eugenics matter rather than an issue concerning individual rights. Dowbiggin argues that not every eugenist joined the ESA "solely for eugenic reasons", but he postulates that there were clear ideological connections between the eugenics and euthanasia movements. The Voluntary Euthanasia Legalisation Society (now called Dignity in Dying) was founded in 1935 by Charles Killick Millard. The movement campaigned for the legalisation of euthanasia in Great Britain. In January 1936, King George V was given a fatal dose of morphine and cocaine to hasten his death. At the time he was suffering from cardio-respiratory failure, and the decision to end his life was made by his physician, Lord Dawson. Although this event was kept a secret for over 50 years, the death of George V coincided with proposed legislation in the House of Lords to legalise euthanasia. A 24 July 1939 killing of a severely disabled infant in Nazi Germany was described in a BBC "Genocide Under the Nazis Timeline" as the first "state-sponsored euthanasia". Parties that consented to the killing included Hitler's office, the parents, and the Reich Committee for the Scientific Registration of Serious and Congenitally Based Illnesses. The Telegraph noted that the killing of the disabled infant—whose name was Gerhard Kretschmar, born blind, with missing limbs, subject to convulsions, and reportedly "an idiot"—provided "the rationale for a secret Nazi decree that led to 'mercy killings' of almost 300,000 mentally and physically handicapped people". While Kretschmar's killing received parental consent, most of the 5,000 to 8,000 children killed afterwards were forcibly taken from their parents. The "euthanasia campaign" of mass murder gathered momentum on 14 January 1940 when the "handicapped" were killed with gas vans and at killing centres, eventually leading to the deaths of 70,000 adult Germans. Professor Robert Jay Lifton, author of The Nazi Doctors and a leading authority on the T4 program, contrasts this program with what he considers to be a genuine euthanasia. He explains that the Nazi version of "euthanasia" was based on the work of Adolf Jost, who published The Right to Death (Das Recht auf den Tod) in 1895. 
Lifton writes: Jost argued that control over the death of the individual must ultimately belong to the social organism, the state. This concept is in direct opposition to the Anglo-American concept of euthanasia, which emphasizes the individual's 'right to die' or 'right to death' or 'right to his or her own death,' as the ultimate human claim. In contrast, Jost was pointing to the state's right to kill. ... Ultimately the argument was biological: 'The rights to death [are] the key to the fitness of life.' The state must own death—must kill—in order to keep the social organism alive and healthy. In modern terms, the use of "euthanasia" in the context of Action T4 is seen to be a euphemism to disguise a program of genocide, in which people were killed on the grounds of "disabilities, religious beliefs, and discordant individual values". Compared to the discussions of euthanasia that emerged post-war, the Nazi program may have been worded in terms that appear similar to the modern use of "euthanasia", but there was no "mercy" and the patients were not necessarily terminally ill. Despite these differences, historian and euthanasia opponent Ian Dowbiggin writes that "the origins of Nazi euthanasia, like those of the American euthanasia movement, predate the Third Reich and were intertwined with the history of eugenics and Social Darwinism, and with efforts to discredit traditional morality and ethics." On 6 January 1949, the Euthanasia Society of America presented to the New York State Legislature a petition to legalize euthanasia, signed by 379 leading Protestant and Jewish ministers, the largest group of religious leaders ever to have taken this stance. A similar petition had been sent to the New York Legislature in 1947, signed by approximately 1,000 New York physicians. Roman Catholic religious leaders criticized the petition, saying that such a bill would "legalize a suicide-murder pact" and a "rationalization of the fifth commandment of God, 'Thou Shalt Not Kill.'" The Right Reverend Robert E. McCormick stated that: The ultimate object of the Euthanasia Society is based on the Totalitarian principle that the state is supreme and that the individual does not have the right to live if his continuance in life is a burden or hindrance to the state. The Nazis followed this principle and compulsory Euthanasia was practiced as a part of their program during the recent war. We American citizens of New York State must ask ourselves this question: "Are we going to finish Hitler's job?" The petition brought tensions between the American Euthanasia Society and the Catholic Church to a head, contributing to a climate of anti-Catholic sentiment generally, regarding issues such as birth control, eugenics, and population control. However, the petition did not result in any legal changes. Historically, the euthanasia debate has tended to focus on a number of key concerns. According to euthanasia opponent Ezekiel Emanuel, proponents of euthanasia have presented four main arguments: a) that people have a right to self-determination, and thus should be allowed to choose their own fate; b) assisting a subject to die might be a better choice than requiring that they continue to suffer; c) the distinction between passive euthanasia, which is often permitted, and active euthanasia, which is not, is not substantive (or that the underlying principle–the doctrine of double effect–is unreasonable or unsound); and d) permitting euthanasia will not necessarily lead to unacceptable consequences. 
Pro-euthanasia activists often point to countries like the Netherlands and Belgium, and states like Oregon, where euthanasia has been legalized, to argue that it is mostly unproblematic. Similarly, Emanuel argues that there are four major arguments presented by opponents of euthanasia: a) not all deaths are painful; b) alternatives, such as cessation of active treatment, combined with the use of effective pain relief, are available; c) the distinction between active and passive euthanasia is morally significant; and d) legalising euthanasia will place society on a slippery slope, which will lead to unacceptable consequences. In fact, in Oregon, in 2013, pain was not one of the top five reasons people sought euthanasia. Top reasons were a loss of dignity and a fear of burdening others. In the United States in 2013, 47% nationwide supported doctor-assisted suicide. This included 32% of Latinos and 29% of African-Americans. Some U.S. disability rights organizations have also opposed bills legalizing assisted suicide. A 2015 Populus poll in the United Kingdom found broad public support for assisted dying. 82% of people supported the introduction of assisted dying laws, including 86% of people with disabilities. An alternative approach to the question is seen in the hospice movement, which promotes palliative care for the dying and terminally ill. This has pioneered the use of pain-relieving drugs in a holistic atmosphere in which the patient's spiritual care ranks alongside physical care. It 'intends neither to hasten nor postpone death'. West's Encyclopedia of American Law states that "a 'mercy killing' or euthanasia is generally considered to be a criminal homicide" and is normally used as a synonym of homicide committed at a request made by the patient. The judicial sense of the term "homicide" includes any intervention undertaken with the express intention of ending a life, even to relieve intractable suffering. Not all homicide is unlawful. Two designations of homicide that carry no criminal punishment are justifiable and excusable homicide. In most countries this is not the status of euthanasia. The term "euthanasia" is usually confined to the active variety; the University of Washington website states that "euthanasia generally means that the physician would act directly, for instance by giving a lethal injection, to end the patient's life". Physician-assisted suicide is thus not classified as euthanasia by the US State of Oregon, where it is legal under the Oregon Death with Dignity Act, and despite its name, it is not legally classified as suicide either. Unlike physician-assisted suicide, withholding or withdrawing life-sustaining treatments with patient consent (voluntary) is almost unanimously considered, at least in the United States, to be legal. The use of pain medication to relieve suffering, even if it hastens death, has been held as legal in several court decisions. Some governments around the world have legalized voluntary euthanasia, but most commonly it is still considered to be criminal homicide. In the Netherlands and Belgium, where euthanasia has been legalized, it still remains homicide, although it is not prosecuted and not punishable if the perpetrator (the doctor) meets certain legal conditions. In a historic judgment, the Supreme Court of India legalized passive euthanasia. The apex court remarked in the judgment that the Constitution of India values liberty, dignity, autonomy, and privacy. A bench headed by Chief Justice Dipak Misra delivered a unanimous judgment. 
A 2010 survey in the United States of more than 10,000 physicians found that 16.3% of physicians would consider halting life-sustaining therapy because the family demanded it, even if they believed that it was premature. Approximately 54.5% would not, and the remaining 29.2% responded "it depends". The study also found that 45.8% of physicians agreed that physician-assisted suicide should be allowed in some cases; 40.7% did not, and the remaining 13.5% felt it depended. In the United Kingdom, the assisted dying campaign group Dignity in Dying cites research in which 54% of general practitioners support or are neutral towards a law change on assisted dying. Similarly, a 2017 Doctors.net.uk poll reported in the British Medical Journal stated that 55% of doctors believe assisted dying, in defined circumstances, should be legalised in the UK. The Roman Catholic Church condemns euthanasia and assisted suicide as morally wrong. As paragraph 2324 of the Catechism of the Catholic Church states, "Intentional euthanasia, whatever its forms or motives, is murder. It is gravely contrary to the dignity of the human person and to the respect due to the living God, his Creator". Because of this, per the Declaration on Euthanasia, the practice is unacceptable within the Church. The Orthodox Church in America, along with other Eastern Orthodox Churches, also opposes euthanasia stating that "euthanasia is the deliberate cessation of human life, and, as such, must be condemned as murder." Many non-Catholic churches in the United States take a stance against euthanasia. Among Protestant denominations, the Episcopal Church passed a resolution in 1991 opposing euthanasia and assisted suicide stating that it is "morally wrong and unacceptable to take a human life to relieve the suffering caused by incurable illnesses." Protestant and other non-Catholic churches which oppose euthanasia include: The Church of England accepts passive euthanasia under some circumstances, but is strongly against active euthanasia, and has led opposition against recent attempts to legalise it. The United Church of Canada accepts passive euthanasia under some circumstances, but is in general against active euthanasia, with growing acceptance now that active euthanasia has been partly legalised in Canada. The Waldensians take a liberal stance on Euthanasia and allow the decision to lie with individuals. Euthanasia is a complex issue in Islamic theology; however, in general it is considered contrary to Islamic law and holy texts. Among interpretations of the Qur'an and Hadith, the early termination of life is a crime, be it by suicide or helping one commit suicide. The various positions on the cessation of medical treatment are mixed and considered a different class of action than direct termination of life, especially if the patient is suffering. Suicide and euthanasia are both crimes in almost all Muslim majority countries. There is much debate on the topic of euthanasia in Judaic theology, ethics, and general opinion (especially in Israel and the United States). Passive euthanasia was declared legal by Israel's highest court under certain conditions and has reached some level of acceptance. Active euthanasia remains illegal; however, the topic is actively under debate with no clear consensus through legal, ethical, theological and spiritual perspectives.
[ { "paragraph_id": 0, "text": "Euthanasia (from Greek: εὐθανασία, lit. 'good death': εὖ, eu, 'well, good' + θάνατος, thanatos, 'death') is the practise of intentionally ending life to eliminate pain and suffering.", "title": "" }, { "paragraph_id": 1, "text": "Different countries have different euthanasia laws. The British House of Lords select committee on medical ethics defines euthanasia as \"a deliberate intervention undertaken with the express intention of ending a life to relieve intractable suffering\". In the Netherlands and Belgium, euthanasia is understood as \"termination of life by a doctor at the request of a patient\". The Dutch law, however, does not use the term 'euthanasia' but includes the concept under the broader definition of \"assisted suicide and termination of life on request\".", "title": "" }, { "paragraph_id": 2, "text": "Euthanasia is categorised in different ways, which include voluntary, non-voluntary, and involuntary. Voluntary euthanasia is when a person wishes to have their life ended and is legal in a growing number of countries. Non-voluntary euthanasia occurs when a patient's consent is unavailable and is legal in some countries under certain limited conditions, in both active and passive forms. Involuntary euthanasia, which is done without asking for consent or against the patient's will, is illegal in all countries and is usually considered murder.", "title": "" }, { "paragraph_id": 3, "text": "As of 2006, euthanasia had become the most active area of research in bioethics. In some countries, divisive public controversy occurs over the moral, ethical, and legal issues associated with euthanasia. Passive euthanasia (known as \"pulling the plug\") is legal under some circumstances in many countries. Active euthanasia, however, is legal or de facto legal in only a handful of countries (for example, Belgium, Canada, and Switzerland), which limit it to specific circumstances and require the approval of counsellors, doctors, or other specialists. In some countries—such as Nigeria, Saudi Arabia, and Pakistan—support for active euthanasia is almost nonexistent.", "title": "" }, { "paragraph_id": 4, "text": "Like other terms borrowed from history, \"euthanasia\" has had different meanings depending on usage. The first apparent usage of the term \"euthanasia\" belongs to the historian Suetonius, who described how the Emperor Augustus, \"dying quickly and without suffering in the arms of his wife, Livia, experienced the 'euthanasia' he had wished for.\" The word \"euthanasia\" was first used in a medical context by Francis Bacon in the 17th century to refer to an easy, painless, happy death, during which it was a \"physician's responsibility to alleviate the 'physical sufferings' of the body.\" Bacon referred to an \"outward euthanasia\"—the term \"outward\" he used to distinguish from a spiritual concept—the euthanasia \"which regards the preparation of the soul.\"", "title": "Definition" }, { "paragraph_id": 5, "text": "In current usage, euthanasia has been defined as the \"painless inducement of a quick death\". However, it is argued that this approach fails to properly define euthanasia, as it leaves open a number of possible actions that would meet the requirements of the definition but would not be seen as euthanasia. 
In particular, these include situations where a person kills another, painlessly, but for no reason beyond that of personal gain, or accidental deaths that are quick and painless but not intentional.", "title": "Definition" }, { "paragraph_id": 6, "text": "Another approach incorporates the notion of suffering into the definition. The definition offered by the Oxford English Dictionary incorporates suffering as a necessary condition with \"the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma\", This approach is included in Marvin Khol and Paul Kurtz's definition of it as \"a mode or act of inducing or permitting death painlessly as a relief from suffering\". Counterexamples can be given: such definitions may encompass killing a person suffering from an incurable disease for personal gain (such as to claim an inheritance), and commentators such as Tom Beauchamp and Arnold Davidson have argued that doing so would constitute \"murder simpliciter\" rather than euthanasia.", "title": "Definition" }, { "paragraph_id": 7, "text": "The third element incorporated into many definitions is that of intentionality: the death must be intended rather than accidental, and the intent of the action must be a \"merciful death\". Michael Wreen argued that \"the principal thing that distinguishes euthanasia from intentional killing simpliciter is the agent's motive: it must be a good motive insofar as the good of the person killed is concerned.\" Similarly, Heather Draper speaks to the importance of motive, arguing that \"the motive forms a crucial part of arguments for euthanasia, because it must be in the best interests of the person on the receiving end.\" Definitions such as those offered by the House of Lords Select committee on Medical Ethics take this path, where euthanasia is defined as \"a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering.\" Beauchamp and Davidson also highlight Baruch Brody's \"an act of euthanasia is one in which one person ... (A) kills another person (B) for the benefit of the second person, who actually does benefit from being killed\".", "title": "Definition" }, { "paragraph_id": 8, "text": "Draper argued that any definition of euthanasia must incorporate four elements: an agent and a subject; an intention; causal proximity, such that the actions of the agent lead to the outcome; and an outcome. Based on this, she offered a definition incorporating those elements, stating that euthanasia \"must be defined as death that results from the intention of one person to kill another person, using the most gentle and painless means possible, that is motivated solely by the best interests of the person who dies.\" Prior to Draper, Beauchamp and Davidson had also offered a definition that included these elements. Their definition specifically discounts fetuses to distinguish between abortions and euthanasia:", "title": "Definition" }, { "paragraph_id": 9, "text": "In summary, we have argued ... 
that the death of a human being, A, is an instance of euthanasia if and only if (1) A's death is intended by at least one other human being, B, where B is either the cause of death or a causally relevant feature of the event resulting in death (whether by action or by omission); (2) there is either sufficient current evidence for B to believe that A is acutely suffering or irreversibly comatose, or there is sufficient current evidence related to A's present condition such that one or more known causal laws supports B's belief that A will be in a condition of acute suffering or irreversible comatoseness; (3) (a) B's primary reason for intending A's death is cessation of A's (actual or predicted future) suffering or irreversible comatoseness, where B does not intend A's death for a different primary reason, though there may be other relevant reasons, and (b) there is sufficient current evidence for either A or B that causal means to A's death will not produce any more suffering than would be produced for A if B were not to intervene; (4) the causal means to the event of A's death are chosen by A or B to be as painless as possible, unless either A or B has an overriding reason for a more painful causal means, where the reason for choosing the latter causal means does not conflict with the evidence in 3b; (5) A is a nonfetal organism.", "title": "Definition" }, { "paragraph_id": 10, "text": "Wreen, in part responding to Beauchamp and Davidson, offered a six-part definition:", "title": "Definition" }, { "paragraph_id": 11, "text": "Person A committed an act of euthanasia if and only if (1) A killed B or let her die; (2) A intended to kill B; (3) the intention specified in (2) was at least partial cause of the action specified in (1); (4) the causal journey from the intention specified in (2) to the action specified in (1) is more or less in accordance with A's plan of action; (5) A's killing of B is a voluntary action; (6) the motive for the action specified in (1), the motive standing behind the intention specified in (2), is the good of the person killed.", "title": "Definition" }, { "paragraph_id": 12, "text": "Wreen also considered a seventh requirement: \"(7) The good specified in (6) is, or at least includes, the avoidance of evil\", although, as Wreen noted in the paper, he was not convinced that the restriction was required.", "title": "Definition" }, { "paragraph_id": 13, "text": "In discussing his definition, Wreen noted the difficulty of justifying euthanasia when faced with the notion of the subject's \"right to life\". In response, Wreen argued that euthanasia has to be voluntary and that \"involuntary euthanasia is, as such, a great wrong\". Other commentators incorporate consent more directly into their definitions. For example, in a discussion of euthanasia presented in 2003 by the European Association of Palliative Care (EPAC) Ethics Task Force, the authors offered: \"Medicalized killing of a person without the person's consent, whether nonvoluntary (where the person is unable to consent) or involuntary (against the person's will), is not euthanasia: it is murder. 
Hence, euthanasia can be voluntary only.\" Although the EPAC Ethics Task Force argued that both non-voluntary and involuntary euthanasia could not be included in the definition of euthanasia, there is discussion in the literature about excluding one but not the other.", "title": "Definition" }, { "paragraph_id": 14, "text": "Euthanasia may be classified into three types, according to whether a person gives informed consent: voluntary, non-voluntary and involuntary.", "title": "Classification" }, { "paragraph_id": 15, "text": "There is a debate within the medical and bioethics literature about whether or not the non-voluntary (and by extension, involuntary) killing of patients can be regarded as euthanasia, irrespective of intent or the patient's circumstances. In the definitions offered by Beauchamp and Davidson and, later, by Wreen, consent on the part of the patient was not considered one of their criteria, although it may have been required to justify euthanasia. However, others see consent as essential.", "title": "Classification" }, { "paragraph_id": 16, "text": "Voluntary euthanasia is conducted with the consent of the patient. Active voluntary euthanasia is legal in Belgium, Luxembourg and the Netherlands. Passive voluntary euthanasia is legal throughout the US per Cruzan v. Director, Missouri Department of Health. When the patient brings about their own death with the assistance of a physician, the term assisted suicide is often used instead. Assisted suicide is legal in Switzerland and the U.S. states of California, Oregon, Washington, Montana and Vermont.", "title": "Classification" }, { "paragraph_id": 17, "text": "Non-voluntary euthanasia is conducted when the consent of the patient is unavailable. Examples include child euthanasia, which is illegal worldwide but decriminalised under certain specific circumstances in the Netherlands under the Groningen Protocol. Passive forms of non-voluntary euthanasia (i.e. withholding treatment) are legal in a number of countries under specified conditions.", "title": "Classification" }, { "paragraph_id": 18, "text": "Involuntary euthanasia is conducted against the will of the patient.", "title": "Classification" }, { "paragraph_id": 19, "text": "Voluntary, non-voluntary and involuntary types can be further divided into passive or active variants. Passive euthanasia entails the withholding treatment necessary for the continuance of life. Active euthanasia entails the use of lethal substances or forces (such as administering a lethal injection), and is more controversial. While some authors consider these terms to be misleading and unhelpful, they are nonetheless commonly used. In some cases, such as the administration of increasingly necessary, but toxic doses of painkillers, there is a debate whether or not to regard the practice as active or passive.", "title": "Classification" }, { "paragraph_id": 20, "text": "Euthanasia was practiced in Ancient Greece and Rome: for example, hemlock was employed as a means of hastening death on the island of Kea, a technique also employed in Massalia. 
Euthanasia, in the sense of the deliberate hastening of a person's death, was supported by Socrates, Plato and Seneca the Elder in the ancient world, although Hippocrates appears to have spoken against the practice, writing \"I will not prescribe a deadly drug to please someone, nor give advice that may cause his death\" (noting there is some debate in the literature about whether or not this was intended to encompass euthanasia).", "title": "History" }, { "paragraph_id": 21, "text": "The term euthanasia, in the earlier sense of supporting someone as they died, was used for the first time by Francis Bacon. In his work, Euthanasia medica, he chose this ancient Greek word and, in doing so, distinguished between euthanasia interior, the preparation of the soul for death, and euthanasia exterior, which was intended to make the end of life easier and painless, in exceptional circumstances by shortening life. That the ancient meaning of an easy death came to the fore again in the early modern period can be seen from its definition in the 18th century Zedlers Universallexikon:", "title": "History" }, { "paragraph_id": 22, "text": "Euthanasia: a very gentle and quiet death, which happens without painful convulsions. The word comes from ευ, bene, well, and θανατος, mors, death.", "title": "History" }, { "paragraph_id": 23, "text": "The concept of euthanasia in the sense of alleviating the process of death goes back to the medical historian Karl Friedrich Heinrich Marx, who drew on Bacon's philosophical ideas. According to Marx, a doctor had a moral duty to ease the suffering of death through encouragement, support and mitigation using medication. Such an \"alleviation of death\" reflected the contemporary zeitgeist, but was brought into the medical canon of responsibility for the first time by Marx. Marx also stressed the distinction of the theological care of the soul of sick people from the physical care and medical treatment by doctors.", "title": "History" }, { "paragraph_id": 24, "text": "Euthanasia in its modern sense has always been strongly opposed in the Judeo-Christian tradition. Thomas Aquinas opposed both and argued that the practice of euthanasia contradicted our natural human instincts of survival, as did Francois Ranchin (1565–1641), a French physician and professor of medicine, and Michael Boudewijns (1601–1681), a physician and teacher. Other voices argued for euthanasia, such as John Donne in 1624, and euthanasia continued to be practised. In 1678, the publication of Caspar Questel's De pulvinari morientibus non-subtrahend, (\"On the pillow of which the dying should not be deprived\"), initiated debate on the topic. Questel described various customs which were employed at the time to hasten the death of the dying, (including the sudden removal of a pillow, which was believed to accelerate death), and argued against their use, as doing so was \"against the laws of God and Nature\". This view was shared by others who followed, including Philipp Jakob Spener, Veit Riedlin and Johann Georg Krünitz. Despite opposition, euthanasia continued to be practised, involving techniques such as bleeding, suffocation, and removing people from their beds to be placed on the cold ground.", "title": "History" }, { "paragraph_id": 25, "text": "Suicide and euthanasia became more accepted during the Age of Enlightenment. Thomas More wrote of euthanasia in Utopia, although it is not clear if More was intending to endorse the practice. 
Other cultures have taken different approaches: for example, in Japan suicide has not traditionally been viewed as a sin, as it is used in cases of honor, and accordingly, the perceptions of euthanasia are different from those in other parts of the world.", "title": "History" }, { "paragraph_id": 26, "text": "In the mid-1800s, the use of morphine to treat \"the pains of death\" emerged, with John Warren recommending its use in 1848. A similar use of chloroform was revealed by Joseph Bullar in 1866. However, in neither case was it recommended that the use should be to hasten death. In 1870 Samuel Williams, a schoolteacher, initiated the contemporary euthanasia debate through a speech given at the Birmingham Speculative Club in England, which was subsequently published in a one-off publication entitled Essays of the Birmingham Speculative Club, the collected works of a number of members of an amateur philosophical society. Williams' proposal was to use chloroform to deliberately hasten the death of terminally ill patients:", "title": "History" }, { "paragraph_id": 27, "text": "That in all cases of hopeless and painful illness, it should be the recognized duty of the medical attendant, whenever so desired by the patient, to administer chloroform or such other anaesthetic as may by-and-bye supersede chloroform – so as to destroy consciousness at once, and put the sufferer to a quick and painless death; all needful precautions being adopted to prevent any possible abuse of such duty; and means being taken to establish, beyond the possibility of doubt or question, that the remedy was applied at the express wish of the patient.", "title": "History" }, { "paragraph_id": 28, "text": "The essay was favourably reviewed in The Saturday Review, but an editorial against the essay appeared in The Spectator. From there it proved to be influential, and other writers came out in support of such views: Lionel Tollemache wrote in favour of euthanasia, as did Annie Besant, the essayist and reformer who later became involved with the National Secular Society, considering it a duty to society to \"die voluntarily and painlessly\" when one reaches the point of becoming a 'burden'. Popular Science analyzed the issue in May 1873, assessing both sides of the argument. Kemp notes that at the time, medical doctors did not participate in the discussion; it was \"essentially a philosophical enterprise ... tied inextricably to a number of objections to the Christian doctrine of the sanctity of human life\".", "title": "History" }, { "paragraph_id": 29, "text": "The rise of the euthanasia movement in the United States coincided with the so-called Gilded Age, a time of social and technological change that encompassed an \"individualistic conservatism that praised laissez-faire economics, scientific method, and rationalism\", along with major depressions, industrialisation and conflict between corporations and labour unions. It was also the period in which the modern hospital system was developed, which has been seen as a factor in the emergence of the euthanasia debate.", "title": "History" }, { "paragraph_id": 30, "text": "Robert Ingersoll argued for euthanasia, stating in 1894 that where someone is suffering from a terminal illness, such as terminal cancer, they should have a right to end their pain through suicide. Felix Adler offered a similar approach, although, unlike Ingersoll, Adler did not reject religion. In fact, he argued from an Ethical Culture framework. 
In 1891, Adler argued that those suffering from overwhelming pain should have the right to commit suicide, and, furthermore, that it should be permissible for a doctor to assist – thus making Adler the first \"prominent American\" to argue for suicide in cases where people were suffering from chronic illness. Both Ingersoll and Adler argued for voluntary euthanasia of adults suffering from terminal ailments. Dowbiggin argues that by breaking down prior moral objections to euthanasia and suicide, Ingersoll and Adler enabled others to stretch the definition of euthanasia.", "title": "History" }, { "paragraph_id": 31, "text": "The first attempt to legalise euthanasia took place in the United States, when Henry Hunt introduced legislation into the General Assembly of Ohio in 1906. Hunt did so at the behest of Anna Sophina Hall, a wealthy heiress who was a major figure in the euthanasia movement during the early 20th century in the United States. Hall had watched her mother die after an extended battle with liver cancer, and had dedicated herself to ensuring that others would not have to endure the same suffering. Towards this end she engaged in an extensive letter writing campaign, recruited Lurana Sheldon and Maud Ballington Booth, and organised a debate on euthanasia at the annual meeting of the American Humane Association in 1905 – described by Jacob Appel as the first significant public debate on the topic in the 20th century.", "title": "History" }, { "paragraph_id": 32, "text": "Hunt's bill called for the administration of an anesthetic to bring about a patient's death, so long as the person is of lawful age and sound mind, and was suffering from a fatal injury, an irrevocable illness, or great physical pain. It also required that the case be heard by a physician, required informed consent in front of three witnesses, and required the attendance of three physicians who had to agree that the patient's recovery was impossible. A motion to reject the bill outright was voted down, but the bill failed to pass, 79 to 23.", "title": "History" }, { "paragraph_id": 33, "text": "Along with the Ohio euthanasia proposal, in 1906 Assemblyman Ross Gregory introduced a proposal to permit euthanasia to the Iowa legislature. However, the Iowa legislation was broader in scope than that offered in Ohio. It allowed for the death of any person of at least ten years of age who suffered from an ailment that would prove fatal and cause extreme pain, should they be of sound mind and express a desire to artificially hasten their death. In addition, it allowed for infants to be euthanised if they were sufficiently deformed, and permitted guardians to request euthanasia on behalf of their wards. The proposed legislation also imposed penalties on physicians who refused to perform euthanasia when requested: a 6–12-month prison term and a fine of between $200 and $1,000. The proposal proved to be controversial. 
It engendered considerable debate and failed to pass, having been withdrawn from consideration after being passed to the Committee on Public Health.", "title": "History" }, { "paragraph_id": 34, "text": "After 1906 the euthanasia debate reduced in intensity, resurfacing periodically, but not returning to the same level of debate until the 1930s in the United Kingdom.", "title": "History" }, { "paragraph_id": 35, "text": "Euthanasia opponent Ian Dowbiggin argues that the early membership of the Euthanasia Society of America (ESA) reflected how many perceived euthanasia at the time, often seeing it as a eugenics matter rather than an issue concerning individual rights. Dowbiggin argues that not every eugenist joined the ESA \"solely for eugenic reasons\", but he postulates that there were clear ideological connections between the eugenics and euthanasia movements.", "title": "History" }, { "paragraph_id": 36, "text": "The Voluntary Euthanasia Legalisation Society was founded in 1935 by Charles Killick Millard (now called Dignity in Dying). The movement campaigned for the legalisation of euthanasia in Great Britain.", "title": "History" }, { "paragraph_id": 37, "text": "In January 1936, King George V was given a fatal dose of morphine and cocaine to hasten his death. At the time he was suffering from cardio-respiratory failure, and the decision to end his life was made by his physician, Lord Dawson. Although this event was kept a secret for over 50 years, the death of George V coincided with proposed legislation in the House of Lords to legalise euthanasia.", "title": "History" }, { "paragraph_id": 38, "text": "A 24 July 1939 killing of a severely disabled infant in Nazi Germany was described in a BBC \"Genocide Under the Nazis Timeline\" as the first \"state-sponsored euthanasia\". Parties that consented to the killing included Hitler's office, the parents, and the Reich Committee for the Scientific Registration of Serious and Congenitally Based Illnesses. The Telegraph noted that the killing of the disabled infant—whose name was Gerhard Kretschmar, born blind, with missing limbs, subject to convulsions, and reportedly \"an idiot\"— provided \"the rationale for a secret Nazi decree that led to 'mercy killings' of almost 300,000 mentally and physically handicapped people\". While Kretchmar's killing received parental consent, most of the 5,000 to 8,000 children killed afterwards were forcibly taken from their parents.", "title": "History" }, { "paragraph_id": 39, "text": "The \"euthanasia campaign\" of mass murder gathered momentum on 14 January 1940 when the \"handicapped\" were killed with gas vans and at killing centres, eventually leading to the deaths of 70,000 adult Germans. Professor Robert Jay Lifton, author of The Nazi Doctors and a leading authority on the T4 program, contrasts this program with what he considers to be a genuine euthanasia. He explains that the Nazi version of \"euthanasia\" was based on the work of Adolf Jost, who published The Right to Death (Das Recht auf den Tod) in 1895. Lifton writes:", "title": "History" }, { "paragraph_id": 40, "text": "Jost argued that control over the death of the individual must ultimately belong to the social organism, the state. This concept is in direct opposition to the Anglo-American concept of euthanasia, which emphasizes the individual's 'right to die' or 'right to death' or 'right to his or her own death,' as the ultimate human claim. In contrast, Jost was pointing to the state's right to kill. ... 
Ultimately the argument was biological: 'The rights to death [are] the key to the fitness of life.' The state must own death—must kill—in order to keep the social organism alive and healthy.", "title": "History" }, { "paragraph_id": 41, "text": "In modern terms, the use of \"euthanasia\" in the context of Action T4 is seen to be a euphemism to disguise a program of genocide, in which people were killed on the grounds of \"disabilities, religious beliefs, and discordant individual values\". Compared to the discussions of euthanasia that emerged post-war, the Nazi program may have been worded in terms that appear similar to the modern use of \"euthanasia\", but there was no \"mercy\" and the patients were not necessarily terminally ill. Despite these differences, historian and euthanasia opponent Ian Dowbiggin writes that \"the origins of Nazi euthanasia, like those of the American euthanasia movement, predate the Third Reich and were intertwined with the history of eugenics and Social Darwinism, and with efforts to discredit traditional morality and ethics.\"", "title": "History" }, { "paragraph_id": 42, "text": "On 6 January 1949, the Euthanasia Society of America presented to the New York State Legislature a petition to legalize euthanasia, signed by 379 leading Protestant and Jewish ministers, the largest group of religious leaders ever to have taken this stance. A similar petition had been sent to the New York Legislature in 1947, signed by approximately 1,000 New York physicians. Roman Catholic religious leaders criticized the petition, saying that such a bill would \"legalize a suicide-murder pact\" and a \"rationalization of the fifth commandment of God, 'Thou Shalt Not Kill.'\" The Right Reverend Robert E. McCormick stated that:", "title": "History" }, { "paragraph_id": 43, "text": "The ultimate object of the Euthanasia Society is based on the Totalitarian principle that the state is supreme and that the individual does not have the right to live if his continuance in life is a burden or hindrance to the state. The Nazis followed this principle and compulsory Euthanasia was practiced as a part of their program during the recent war. We American citizens of New York State must ask ourselves this question: \"Are we going to finish Hitler's job?\"", "title": "History" }, { "paragraph_id": 44, "text": "The petition brought tensions between the American Euthanasia Society and the Catholic Church to a head that contributed to a climate of anti-Catholic sentiment generally, regarding issues such as birth control, eugenics, and population control. However, the petition did not result in any legal changes.", "title": "History" }, { "paragraph_id": 45, "text": "Historically, the euthanasia debate has tended to focus on a number of key concerns. According to euthanasia opponent Ezekiel Emanuel, proponents of euthanasia have presented four main arguments: a) that people have a right to self-determination, and thus should be allowed to choose their own fate; b) assisting a subject to die might be a better choice than requiring that they continue to suffer; c) the distinction between passive euthanasia, which is often permitted, and active euthanasia, which is not substantive (or that the underlying principle–the doctrine of double effect–is unreasonable or unsound); and d) permitting euthanasia will not necessarily lead to unacceptable consequences. 
Pro-euthanasia activists often point to countries like the Netherlands and Belgium, and states like Oregon, where euthanasia has been legalized, to argue that it is mostly unproblematic.", "title": "Debate" }, { "paragraph_id": 46, "text": "Similarly, Emanuel argues that there are four major arguments presented by opponents of euthanasia: a) not all deaths are painful; b) alternatives, such as cessation of active treatment, combined with the use of effective pain relief, are available; c) the distinction between active and passive euthanasia is morally significant; and d) legalising euthanasia will place society on a slippery slope, which will lead to unacceptable consequences. In fact, in Oregon, in 2013, pain was not one of the top five reasons people sought euthanasia. Top reasons were a loss of dignity, and a fear of burdening others.", "title": "Debate" }, { "paragraph_id": 47, "text": "In the United States in 2013, 47% nationwide supported doctor-assisted suicide. This included 32% of Latinos, 29% of African-Americans. Some U.S. disability rights organizations have also opposed bills legalizing assisted suicide.", "title": "Debate" }, { "paragraph_id": 48, "text": "A 2015 Populus poll in the United Kingdom found broad public support for assisted dying. 82% of people supported the introduction of assisted dying laws, including 86% of people with disabilities.", "title": "Debate" }, { "paragraph_id": 49, "text": "An alternative approach to the question is seen in the hospice movement which promotes palliative care for the dying and terminally ill. This has pioneered the use of pain-relieving drugs in a holistic atmosphere in which the patient's spiritual care ranks alongside physical care. It 'intends neither to hasten nor postpone death'.", "title": "Debate" }, { "paragraph_id": 50, "text": "West's Encyclopedia of American Law states that \"a 'mercy killing' or euthanasia is generally considered to be a criminal homicide\" and is normally used as a synonym of homicide committed at a request made by the patient.", "title": "Legal status" }, { "paragraph_id": 51, "text": "The judicial sense of the term \"homicide\" includes any intervention undertaken with the express intention of ending a life, even to relieve intractable suffering. Not all homicide is unlawful. Two designations of homicide that carry no criminal punishment are justifiable and excusable homicide. In most countries this is not the status of euthanasia. The term \"euthanasia\" is usually confined to the active variety; the University of Washington website states that \"euthanasia generally means that the physician would act directly, for instance by giving a lethal injection, to end the patient's life\". Physician-assisted suicide is thus not classified as euthanasia by the US State of Oregon, where it is legal under the Oregon Death with Dignity Act, and despite its name, it is not legally classified as suicide either. Unlike physician-assisted suicide, withholding or withdrawing life-sustaining treatments with patient consent (voluntary) is almost unanimously considered, at least in the United States, to be legal. The use of pain medication to relieve suffering, even if it hastens death, has been held as legal in several court decisions.", "title": "Legal status" }, { "paragraph_id": 52, "text": "Some governments around the world have legalized voluntary euthanasia but most commonly it is still considered to be criminal homicide. 
In the Netherlands and Belgium, where euthanasia has been legalized, it still remains homicide although it is not prosecuted and not punishable if the perpetrator (the doctor) meets certain legal conditions.", "title": "Legal status" }, { "paragraph_id": 53, "text": "In a historic judgment, the Supreme court of India legalized passive euthanasia. The apex court remarked in the judgment that the Constitution of India values liberty, dignity, autonomy, and privacy. A bench headed by Chief Justice Dipak Misra delivered a unanimous judgment.", "title": "Legal status" }, { "paragraph_id": 54, "text": "A 2010 survey in the United States of more than 10,000 physicians found that 16.3% of physicians would consider halting life-sustaining therapy because the family demanded it, even if they believed that it was premature. Approximately 54.5% would not, and the remaining 29.2% responded \"it depends\". The study also found that 45.8% of physicians agreed that physician-assisted suicide should be allowed in some cases; 40.7% did not, and the remaining 13.5% felt it depended.", "title": "Health professionals' sentiment" }, { "paragraph_id": 55, "text": "In the United Kingdom, the assisted dying campaign group Dignity in Dying cites research in which 54% of general practitioners support or are neutral towards a law change on assisted dying. Similarly, a 2017 Doctors.net.uk poll reported in the British Medical Journal stated that 55% of doctors believe assisted dying, in defined circumstances, should be legalised in the UK.", "title": "Health professionals' sentiment" }, { "paragraph_id": 56, "text": "The Roman Catholic Church condemns euthanasia and assisted suicide as morally wrong. As paragraph 2324 of the Catechism of the Catholic Church states, \"Intentional euthanasia, whatever its forms or motives, is murder. It is gravely contrary to the dignity of the human person and to the respect due to the living God, his Creator\". Because of this, per the Declaration on Euthanasia, the practice is unacceptable within the Church. The Orthodox Church in America, along with other Eastern Orthodox Churches, also opposes euthanasia stating that \"euthanasia is the deliberate cessation of human life, and, as such, must be condemned as murder.\"", "title": "Religious views" }, { "paragraph_id": 57, "text": "Many non-Catholic churches in the United States take a stance against euthanasia. Among Protestant denominations, the Episcopal Church passed a resolution in 1991 opposing euthanasia and assisted suicide stating that it is \"morally wrong and unacceptable to take a human life to relieve the suffering caused by incurable illnesses.\" Protestant and other non-Catholic churches which oppose euthanasia include:", "title": "Religious views" }, { "paragraph_id": 58, "text": "The Church of England accepts passive euthanasia under some circumstances, but is strongly against active euthanasia, and has led opposition against recent attempts to legalise it. The United Church of Canada accepts passive euthanasia under some circumstances, but is in general against active euthanasia, with growing acceptance now that active euthanasia has been partly legalised in Canada. The Waldensians take a liberal stance on Euthanasia and allow the decision to lie with individuals.", "title": "Religious views" }, { "paragraph_id": 59, "text": "Euthanasia is a complex issue in Islamic theology; however, in general it is considered contrary to Islamic law and holy texts. 
Among interpretations of the Qur'an and Hadith, the early termination of life is a crime, be it by suicide or helping one commit suicide. The various positions on the cessation of medical treatment are mixed and considered a different class of action than direct termination of life, especially if the patient is suffering. Suicide and euthanasia are both crimes in almost all Muslim majority countries.", "title": "Religious views" }, { "paragraph_id": 60, "text": "There is much debate on the topic of euthanasia in Judaic theology, ethics, and general opinion (especially in Israel and the United States). Passive euthanasia was declared legal by Israel's highest court under certain conditions and has reached some level of acceptance. Active euthanasia remains illegal; however, the topic is actively under debate with no clear consensus through legal, ethical, theological and spiritual perspectives.", "title": "Religious views" } ]
Euthanasia is the practice of intentionally ending life to eliminate pain and suffering. Different countries have different euthanasia laws. The British House of Lords select committee on medical ethics defines euthanasia as "a deliberate intervention undertaken with the express intention of ending a life to relieve intractable suffering". In the Netherlands and Belgium, euthanasia is understood as "termination of life by a doctor at the request of a patient". The Dutch law, however, does not use the term 'euthanasia' but includes the concept under the broader definition of "assisted suicide and termination of life on request". Euthanasia is categorised in different ways, which include voluntary, non-voluntary, and involuntary. Voluntary euthanasia is when a person wishes to have their life ended and is legal in a growing number of countries. Non-voluntary euthanasia occurs when a patient's consent is unavailable and is legal in some countries under certain limited conditions, in both active and passive forms. Involuntary euthanasia, which is done without asking for consent or against the patient's will, is illegal in all countries and is usually considered murder. As of 2006, euthanasia had become the most active area of research in bioethics. In some countries, divisive public controversy occurs over the moral, ethical, and legal issues associated with euthanasia. Passive euthanasia is legal under some circumstances in many countries. Active euthanasia, however, is legal or de facto legal in only a handful of countries, which limit it to specific circumstances and require the approval of counsellors, doctors, or other specialists. In some countries—such as Nigeria, Saudi Arabia, and Pakistan—support for active euthanasia is almost nonexistent.
2001-10-14T13:29:54Z
2023-12-29T01:02:54Z
[ "Template:'\"", "Template:Look from", "Template:Cite web", "Template:Short description", "Template:Blockquote", "Template:NDB", "Template:Commons category-inline", "Template:Wiktionary inline", "Template:American social conservatism", "Template:Pp-semi-protected", "Template:Use dmy dates", "Template:Globalize", "Template:Cite journal", "Template:Main", "Template:Circa", "Template:Div col", "Template:Cite book", "Template:Cite encyclopedia", "Template:Wikiquote inline", "Template:About", "Template:Lang-el", "Template:Authority control", "Template:Legend", "Template:Suicide navbox", "Template:Euthanasia", "Template:As of", "Template:Reflist", "Template:Webarchive", "Template:Wikinews category", "Template:Homicide", "Template:Rp", "Template:Death", "Template:Div col end", "Template:Cite news", "Template:Use British English", "Template:See also" ]
https://en.wikipedia.org/wiki/Euthanasia
9,588
Extraterrestrial life
Extraterrestrial life or alien life is life which does not originate from Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more advanced than humanity. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a "plurality of worlds" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds "throughout the boundless immensity of space" (originally expressed in his Letter to Herodotus) in The City of God. In his first-century BC poem De rerum natura (Book 2:1048–1076), the Epicurean philosopher Lucretius predicted that humanity would find innumerable exoplanets with life-forms similar to, and different from, the ones on Earth, and even other races of man. Pre-modern writers typically assumed that extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. Nicholas of Cusa wrote in 1440 that Earth is "a brilliant star" like other celestial objects visible in space, one that would appear similar to the Sun from an exterior perspective due to a layer of "fiery brightness" in the outer layer of the atmosphere. He theorised that all extraterrestrial bodies could be inhabited by men, plants, and animals, including the Sun. Descartes wrote that there was no means to prove that the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation. The writings of these individuals demonstrate that interest in extraterrestrial life has existed throughout history, although only recently have humans had any means of investigating it. Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from the analysis of telescope and specimen data to radios used to detect and transmit communications. The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a wide range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One shared space is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue it may be dangerous to actively draw attention to Earth. If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. 
Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. According to the Big Bang interpretations, the universe as a whole was initially too hot to allow life. 15 million years later, it cooled to temperate levels, but the elements that make up living things did not exist yet. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. Although Earth was in a molten state after its birth and may have burned any organics that fell in it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread – by meteoroids, for example – between habitable planets in a process called panspermia. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", where water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as a rock. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, not even to actually have such liquid water. Venus is located in the habitable zone of the Solar System but does not have liquid water because of the conditions of its atmosphere. Jovian planets or Gas Giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution. Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it, even the most hostile ones. As a result, it is inferred that life in other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation, and may have stricter requirements. A planet or moon may not have any life on it, even if it was habitable. It is unclear if life and intelligent life are ubiquitous in the cosmos or rare. The hypothesis of ubiquitous extraterrestrial life relies on the vast size and consistent physical laws of the observable universe. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere else other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. 
Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that all such requirements are simultaneously met by another planet. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data. In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilisations in the Milky Way galaxy. The Drake equation is: N = R∗ ⋅ fp ⋅ ne ⋅ fl ⋅ fi ⋅ fc ⋅ L {\displaystyle N=R_{*}\cdot f_{p}\cdot n_{e}\cdot f_{l}\cdot f_{i}\cdot f_{c}\cdot L} where: N is the number of civilisations in the Milky Way whose electromagnetic emissions are detectable; R∗ is the rate of formation of stars suitable for the development of intelligent life; fp is the fraction of those stars with planetary systems; ne is the number of planets, per such system, with an environment suitable for life; fl is the fraction of suitable planets on which life actually appears; fi is the fraction of life-bearing planets on which intelligent life emerges; fc is the fraction of civilisations that develop a technology releasing detectable signs of their existence into space; and L is the length of time for which such civilisations release detectable signals into space. Drake's proposed estimates are as follows, but numbers on the right side of the equation are agreed as speculative and open to substitution: 10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000 {\displaystyle 10{,}000=5\cdot 0.5\cdot 2\cdot 1\cdot 0.2\cdot 1\cdot 10{,}000} (a short numerical sketch of this product appears at the end of this passage). The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to make noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten per cent of all Sun-like stars have a system of planets, i.e. there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life, giving a potential explanation for the Fermi paradox. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, such as for life on Earth, which depends on the energy of the sun. However, there are other alternative energy sources, such as volcanos, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. 
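To make the worked Drake estimate quoted above easy to check, here is a minimal illustrative sketch in Python (not part of the source article; the function and parameter names are chosen only for readability) that multiplies out the speculative factor values given in the text:

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    # N = R* * fp * ne * fl * fi * fc * L: estimated number of detectable civilisations.
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Speculative example values quoted in the text: 10,000 = 5 * 0.5 * 2 * 1 * 0.2 * 1 * 10,000
n = drake_equation(r_star=5, f_p=0.5, n_e=2, f_l=1.0, f_i=0.2, f_c=1.0, lifetime=10_000)
print(n)  # 10000.0

Because the result is a plain product, changing any single factor scales the estimate proportionally, which is one reason the output is treated as speculative rather than as a measurement.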
Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A potential replacement for carbon should be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds; two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. On Earth's crust the most abundant of those elements is silicon, in the Hydrosphere it is carbon and in the atmosphere, it is carbon and nitrogen. Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem, the difficulty to kickstart a process of abiogenesis to create life in the first place. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life on Earth started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to the DNA and proteins. Extraterrestrial life may still be stuck on the RNA world, or evolve into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. 
Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed it from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesisers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research in assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. There is a greenhouse effect, the surface is the hottest in the Solar System, sulfuric acid clouds, all surface liquid water is lost, and it has a thick carbon-dioxide atmosphere with huge pressure. Comparing both helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions against life on Venus, there are suspicions that microbial lifeforms may still survive in high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. 
But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient lifeforms may still have left fossilised remains, and microbes may still survive deep underground. As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope to find it on moons orbiting these planets. Europa, from the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which helps the chemical reactions. It may be difficult to dig so deep in order to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug, as it releases water to space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculations about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have subsurface water ocean like several other moons. However, it is of such a great depth that it would be very difficult to access it for study. The science that searches and studies life in the universe, both on Earth and elsewhere, is called astrobiology. With the study of Earth's life, the only known form of life, astrobiology seeks to study how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life in other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. 
Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, with the way each one reacts to sunlight. The goal is to help with the search for similar organisms in exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth was studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis. In August 2011, NASA studied meteorites found on Antarctica, finding adenine, guanine, hypoxanthine and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out pollution of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. 
In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of the planet Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."
Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilisation may be detectable by other means as well. Technology may generate technosignatures, effects on its home planet that natural processes are unlikely to produce. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.
Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilisation, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message.
The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilisation, as it is on Earth. Fossil fuels would likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the level of detail required to perceive it.
The Kardashev scale proposes that a civilisation may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause excess infrared radiation that telescopes may notice. Excess infrared radiation is otherwise typical of young stars, surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to emit excess infrared radiation. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products.
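A minimal sketch of why such waste heat would stand out in the infrared, using Wien's displacement law (the ~300 K shell temperature is an illustrative assumption, not a figure from the text):

```python
# Wien's displacement law: a black body at temperature T radiates most strongly
# at wavelength b / T. A Sun-like photosphere peaks in visible light, while a
# hypothetical Dyson sphere re-radiating the same energy at roughly room
# temperature would peak in the mid-infrared, which is why excess infrared
# around an old star could flag an artificial structure.
WIEN_B = 2.898e-3  # metre-kelvin, Wien's displacement constant

def peak_wavelength_m(temperature_k):
    """Wavelength (in metres) of peak black-body emission at the given temperature."""
    return WIEN_B / temperature_k

print(peak_wavelength_m(5772))  # ~5.0e-07 m: visible light, for a Sun-like star
print(peak_wavelength_m(300))   # ~9.7e-06 m: mid-infrared, for a ~300 K radiating shell
```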
Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (5,550 planets in 4,089 planetary systems including 887 multiple planetary systems as of 1 December 2023). The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars has an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets.
The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs.
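A rough sketch of why transits of small, dim stars are so much more favourable for this kind of measurement: the fraction of starlight blocked (and filtered through the planet's atmosphere) scales as the square of the planet-to-star radius ratio. The radii below are round illustrative values, with a white dwarf taken to be roughly Earth-sized:

```python
# Transit depth: the fraction of a star's light blocked by a transiting planet.
R_EARTH = 6.371e6   # metres
R_SUN   = 6.957e8   # metres

def transit_depth(r_planet_m, r_star_m):
    """Fraction of starlight blocked by a planet crossing the stellar disc."""
    return (r_planet_m / r_star_m) ** 2

print(transit_depth(R_EARTH, R_SUN))          # ~8.4e-05: an Earth twin dims a Sun-like star by ~0.008%
print(transit_depth(R_EARTH, 1.5 * R_EARTH))  # ~0.44: around an Earth-sized white dwarf the signal is enormous
```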
The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. The scholars of Ancient Greece were the first to consider that the universe is inherently understandable, and they rejected explanations based on incomprehensible supernatural forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, those bodies were not regarded as worlds: in the Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements.
Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. Eventually two groups emerged: the atomists, who thought that matter in both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe.
Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds.
The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. The Great Library of Alexandria compiled information about these ideas, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and this knowledge expanded through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself.
The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere.
By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly.
The invention of the telescope a short time later, perfected by Galileo Galilei, dispelled the remaining doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting around a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was put on trial for defending the heliocentric model, which was considered heretical, and was forced to recant it.
The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him.
The heliocentric model was further strengthened by the postulation of the theory of gravity by Sir Isaac Newton. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way.
There was very little actual discussion about extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life might exist on them as well soon became an ongoing topic of discussion, although one with no practical way of being investigated.
The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants.
Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. The idea of life on Mars led British writer H. G. Wells to write the novel The War of the Worlds in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation.
Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere.
By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.
As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System still remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked the idea of Martians for good and lowered the previous expectations of finding alien life in general.
The end of the belief in spontaneous generation forced scientists to investigate the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).
The science fiction genre, although not yet known by that name, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, and others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later observations (with more powerful telescopes) revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site.
The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, the ESA, the INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analysed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial lifeforms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse.
The 20th century came with great technological advances, speculations about future hypothetical technologies, and an increase in the general population's basic knowledge of science thanks to science popularisation through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people at the time failed to recognise it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes.
By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. The knowledge of planetary habitability allows the likelihood of finding life on each specific celestial body to be weighed in scientific terms, as it is known which features are beneficial or harmful for life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.
Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate.
On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.
As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C.
Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science, discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs had in 1977 discussed for a year strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN doesn't have response mechanisms for the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility of existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "non-identified aero spatial phenomena". The agency is maintaining a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied. In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.
[ { "paragraph_id": 0, "text": "Extraterrestrial life or alien life is life which does not originate from Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more advanced than humanity. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology.", "title": "" }, { "paragraph_id": 1, "text": "Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a \"plurality of worlds\" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds \"throughout the boundless immensity of space\" (originally expressed in his Letter to Herodotus) in The City of God. In his first-century poem De rerum natura (Book 2:1048–1076), the Epicurean philosopher Lucretius predicted that humanity would find innumerable exoplanets with life-forms similar to, and different from, the ones on Earth, and even other races of man.", "title": "" }, { "paragraph_id": 2, "text": "Pre-modern writers typically assumed extraterrestrial \"worlds\" are inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. Nicholas of Cusa wrote in 1440 that Earth is \"a brilliant star\" like other celestial objects visible in space; which would appear similar to the Sun from an exterior perspective due to a layer of \"fiery brightness\" in the outer layer of the atmosphere. He theorised all extraterrestrial bodies could be inhabited by men, plants, and animals, including the Sun. Descartes wrote that there was no means to prove that the stars were not inhabited by \"intelligent creatures\", but their existence was a matter of speculation. The writings of these individuals demonstrate interest in extraterrestrial life has existed throughout history, although only recently have humans have had any means of investigating it.", "title": "" }, { "paragraph_id": 3, "text": "Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from the analysis of telescope and specimen data to radios used to detect and transmit communications.", "title": "" }, { "paragraph_id": 4, "text": "The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a wide range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One shared space is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. 
Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue it may be dangerous to actively draw attention to Earth.", "title": "" }, { "paragraph_id": 5, "text": "If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist.", "title": "Context" }, { "paragraph_id": 6, "text": "According to the Big Bang interpretations, the universe as a whole was initially too hot to allow life. 15 million years later, it cooled to temperate levels, but the elements that make up living things did not exist yet. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. Although Earth was in a molten state after its birth and may have burned any organics that fell in it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread – by meteoroids, for example – between habitable planets in a process called panspermia.", "title": "Context" }, { "paragraph_id": 7, "text": "There is an area around a star, the circumstellar habitable zone or \"Goldilocks zone\", where water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as a rock. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, not even to actually have such liquid water. Venus is located in the habitable zone of the Solar System but does not have liquid water because of the conditions of its atmosphere. Jovian planets or Gas Giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution.", "title": "Context" }, { "paragraph_id": 8, "text": "Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it, even the most hostile ones. As a result, it is inferred that life in other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation, and may have stricter requirements. 
A planet or moon may not have any life on it, even if it was habitable.", "title": "Context" }, { "paragraph_id": 9, "text": "It is unclear if life and intelligent life are ubiquitous in the cosmos or rare. The hypothesis of ubiquitous extraterrestrial life relies on the vast size and consistent physical laws of the observable universe. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere else other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth.", "title": "Likelihood of existence" }, { "paragraph_id": 10, "text": "Other authors consider instead that life in the cosmos, or at least multicellular life, may be actually rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that all such requirements are simultaneously met by another planet. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data.", "title": "Likelihood of existence" }, { "paragraph_id": 11, "text": "In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilisations in the Milky Way galaxy. The Drake equation is:", "title": "Likelihood of existence" }, { "paragraph_id": 12, "text": "where:", "title": "Likelihood of existence" }, { "paragraph_id": 13, "text": "and", "title": "Likelihood of existence" }, { "paragraph_id": 14, "text": "Drake's proposed estimates are as follows, but numbers on the right side of the equation are agreed as speculative and open to substitution:", "title": "Likelihood of existence" }, { "paragraph_id": 15, "text": "10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000 {\\displaystyle 10{,}000=5\\cdot 0.5\\cdot 2\\cdot 1\\cdot 0.2\\cdot 1\\cdot 10{,}000}", "title": "Likelihood of existence" }, { "paragraph_id": 16, "text": "The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to make noteworthy conclusions from the equation.", "title": "Likelihood of existence" }, { "paragraph_id": 17, "text": "Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten per cent of all Sun-like stars have a system of planets, i.e. there are 6.25×10 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. 
A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets.", "title": "Likelihood of existence" }, { "paragraph_id": 18, "text": "The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life, giving a potential explanation to the Fermi paradox.", "title": "Likelihood of existence" }, { "paragraph_id": 19, "text": "The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, such as for life on Earth, which depends on the energy of the sun. However, there are other alternative energy sources, such as volcanos, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.", "title": "Biochemical basis" }, { "paragraph_id": 20, "text": "Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane.", "title": "Biochemical basis" }, { "paragraph_id": 21, "text": "Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A potential replacement for carbon should be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds; two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). 
As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. On Earth's crust the most abundant of those elements is silicon, in the Hydrosphere it is carbon and in the atmosphere, it is carbon and nitrogen. Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem, the difficulty to kickstart a process of abiogenesis to create life in the first place.", "title": "Biochemical basis" }, { "paragraph_id": 22, "text": "Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life on Earth started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to the DNA and proteins. Extraterrestrial life may still be stuck on the RNA world, or evolve into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it.", "title": "Biochemical basis" }, { "paragraph_id": 23, "text": "The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed it from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesisers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research in assessing the capacity of life for developing intelligence. 
It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches.", "title": "Biochemical basis" }, { "paragraph_id": 24, "text": "The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now.", "title": "Planetary habitability in the Solar System" }, { "paragraph_id": 25, "text": "The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. There is a greenhouse effect, the surface is the hottest in the Solar System, sulfuric acid clouds, all surface liquid water is lost, and it has a thick carbon-dioxide atmosphere with huge pressure. Comparing both helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions against life on Venus, there are suspicions that microbial lifeforms may still survive in high-altitude clouds.", "title": "Planetary habitability in the Solar System" }, { "paragraph_id": 26, "text": "Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient lifeforms may still have left fossilised remains, and microbes may still survive deep underground.", "title": "Planetary habitability in the Solar System" }, { "paragraph_id": 27, "text": "As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely.", "title": "Planetary habitability in the Solar System" }, { "paragraph_id": 28, "text": "Although the giant planets themselves are highly unlikely to have life, there is much hope to find it on moons orbiting these planets. Europa, from the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which helps the chemical reactions. It may be difficult to dig so deep in order to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug, as it releases water to space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. 
Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane.", "title": "Planetary habitability in the Solar System" }, { "paragraph_id": 29, "text": "Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculations about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have subsurface water ocean like several other moons. However, it is of such a great depth that it would be very difficult to access it for study.", "title": "Planetary habitability in the Solar System" }, { "paragraph_id": 30, "text": "The science that searches and studies life in the universe, both on Earth and elsewhere, is called astrobiology. With the study of Earth's life, the only known form of life, astrobiology seeks to study how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life in other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences.", "title": "Scientific search" }, { "paragraph_id": 31, "text": "The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported.", "title": "Scientific search" }, { "paragraph_id": 32, "text": "Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.", "title": "Scientific search" }, { "paragraph_id": 33, "text": "An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis.", "title": "Scientific search" }, { "paragraph_id": 34, "text": "In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. 
NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions.", "title": "Scientific search" }, { "paragraph_id": 35, "text": "In November 2011, NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012.", "title": "Scientific search" }, { "paragraph_id": 36, "text": "A group of scientists at Cornell University started a catalog of microorganisms, with the way each one reacts to sunlight. The goal is to help with the search for similar organisms in exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth was studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis.", "title": "Scientific search" }, { "paragraph_id": 37, "text": "In August 2011, NASA studied meteorites found on Antarctica, finding adenine, guanine, hypoxanthine and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out pollution of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds (\"amorphous organic solids with a mixed aromatic-aliphatic structure\") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. \"If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life.\"", "title": "Scientific search" }, { "paragraph_id": 38, "text": "In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.", "title": "Scientific search" }, { "paragraph_id": 39, "text": "In December 2023, astronomers reported the first time discovery, in the plumes of Enceladus, moon of the planet Saturn, of hydrogen cyanide, a possible chemical essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, \"these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life.\"", "title": "Scientific search" }, { "paragraph_id": 40, "text": "Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. 
Technology may generate technosignatures, effects on the native planet that may not be caused by natural causes. There are three main types of technosignatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.", "title": "Scientific search" }, { "paragraph_id": 41, "text": "Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would be in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message.", "title": "Scientific search" }, { "paragraph_id": 42, "text": "The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the required level of detail to perceive it.", "title": "Scientific search" }, { "paragraph_id": 43, "text": "The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson-spheres. Those speculative structures would cause an excess infrared radiation, that telescopes may notice. The infrared radiation is typical of young stars, surrounded by dusty protoplanetary disks that will eventually form planets. An older star such as the Sun would have no natural reason to have excess infrared radiation. The presence of heavy elements in a star's light-spectrum is another potential biosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products.", "title": "Scientific search" }, { "paragraph_id": 44, "text": "Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (5,550 planets in 4,089 planetary systems including 887 multiple planetary systems as of 1 December 2023). The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. 
The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives.", "title": "Scientific search" }, { "paragraph_id": 45, "text": "There is at least one planet on average per star. About 1 in 5 Sun-like stars has an \"Earth-sized\" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions.", "title": "Scientific search" }, { "paragraph_id": 46, "text": "The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus.", "title": "Scientific search" }, { "paragraph_id": 47, "text": "As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.", "title": "Scientific search" }, { "paragraph_id": 48, "text": "One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs.", "title": "Scientific search" }, { "paragraph_id": 49, "text": "The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars in Ancient Greece were the first to consider that the universe is inherently understandable, and they rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model that considered that the sun and all other celestial bodies revolve around Earth. However, the Greeks did not consider those bodies to be worlds. In Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
Eventually two groups emerged: the atomists, who thought that matter in both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants must have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the center of the universe, and that this would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe.", "title": "History and cultural impact" }, { "paragraph_id": 50, "text": "Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous \"worlds\" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple \"worlds\" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds.", "title": "History and cultural impact" }, { "paragraph_id": 51, "text": "The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge expanded through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the Church itself.", "title": "History and cultural impact" }, { "paragraph_id": 52, "text": "The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere.", "title": "History and cultural impact" }, { "paragraph_id": 53, "text": "By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than around Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses.
This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, resolved the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just a planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special.", "title": "History and cultural impact" }, { "paragraph_id": 54, "text": "The new ideas were met with resistance from the Catholic Church. Galileo was put on trial for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds \"have no less virtue nor a nature different to that of our earth\" and, like Earth, \"contain animals and inhabitants\". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him.", "title": "History and cultural impact" }, { "paragraph_id": 55, "text": "The heliocentric model was further strengthened by the postulation of the theory of gravity by Sir Isaac Newton. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, the use of the scientific method had become a standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way.", "title": "History and cultural impact" }, { "paragraph_id": 56, "text": "There was very little actual discussion about extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist in them as well soon became an ongoing topic of discussion, although one with no practical ways to investigate.", "title": "History and cultural impact" }, { "paragraph_id": 57, "text": "The possibility of extraterrestrials remained widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed \"cosmic pluralism\" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants.", "title": "History and cultural impact" }, { "paragraph_id": 58, "text": "Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions.
Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. The idea of life on Mars led British writer H. G. Wells to write the novel The War of the Worlds in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation.", "title": "History and cultural impact" }, { "paragraph_id": 59, "text": "Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.", "title": "History and cultural impact" }, { "paragraph_id": 60, "text": "As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the solar system still remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked forever the idea of the existence of Martians and decreased the previous expectations of finding alien life in general. The end of the spontaneous generation belief forced scientists to investigate the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term \"panspermia\" and proposed that life was brought to Earth from elsewhere. Some of those authors were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).", "title": "History and cultural impact" }, { "paragraph_id": 61, "text": "The science fiction genre, although not yet so named at the time, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced the popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, and later advances (such as more powerful telescopes) revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site.", "title": "History and cultural impact" }, { "paragraph_id": 62, "text": "The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is studied by NASA, the ESA, the INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology, not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth.
Astrobiology, however, remains constrained by the current lack of extraterrestrial lifeforms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse.", "title": "History and cultural impact" }, { "paragraph_id": 63, "text": "The 20th century came with great technological advances, speculations about future hypothetical technologies, and an increased basic knowledge of science by the general population thanks to the popularization of science through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens. Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people at the time failed to understand what they were seeing. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes.", "title": "History and cultural impact" }, { "paragraph_id": 64, "text": "By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but the interest in extraterrestrial life increased regardless. This is a result of the advances in several sciences. Knowledge of planetary habitability makes it possible to consider in scientific terms the likelihood of finding life on each specific celestial body, as it is known which features are beneficial and which are harmful for life. Astronomy and telescopes also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and the advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may still be a rarity confined to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.", "title": "History and cultural impact" }, { "paragraph_id": 65, "text": "Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, \"All we know for sure is that the sky is not littered with powerful microwave transmitters\". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate.", "title": "History and cultural impact" }, { "paragraph_id": 66, "text": "Other scientists, however, are pessimistic. Jacques Monod wrote that \"Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance\". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe.
In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.", "title": "History and cultural impact" }, { "paragraph_id": 67, "text": "As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. \"If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans\", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science, discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a \"worldwide scientific, political and humanitarian discussion must occur before any message is sent\".", "title": "History and cultural impact" }, { "paragraph_id": 68, "text": "The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs had in 1977 discussed for a year strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN doesn't have response mechanisms for the case of an extraterrestrial contact.", "title": "Government responses" }, { "paragraph_id": 69, "text": "One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to \"rigorously preclude backward contamination of Earth by extraterrestrial life.\"", "title": "Government responses" }, { "paragraph_id": 70, "text": "In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program.", "title": "Government responses" }, { "paragraph_id": 71, "text": "In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility of existence of primitive life on other planets of the Solar System.", "title": "Government responses" }, { "paragraph_id": 72, "text": "The French space agency has an office for the study of \"non-identified aero spatial phenomena\". The agency is maintaining a publicly accessible database of such phenomena, with over 1600 detailed entries. 
According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied.", "title": "Government responses" }, { "paragraph_id": 73, "text": "In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is \"quite large\". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.", "title": "Government responses" }, { "paragraph_id": 74, "text": "", "title": "External links" } ]
Extraterrestrial life or alien life is life which does not originate from Earth. No extraterrestrial life has yet been conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more advanced than humanity. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about the possibility of inhabited worlds beyond Earth dates back to antiquity. Early Christian writers discussed the idea of a "plurality of worlds" as proposed by earlier thinkers such as Democritus; Augustine references Epicurus's idea of innumerable worlds "throughout the boundless immensity of space" in The City of God. In his first-century BC poem De rerum natura, the Epicurean philosopher Lucretius predicted that humanity would find innumerable exoplanets with life-forms similar to, and different from, the ones on Earth, and even other races of man. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. Nicholas of Cusa wrote in 1440 that Earth is "a brilliant star" like other celestial objects visible in space, and that it would appear similar to the Sun from an exterior perspective due to a layer of "fiery brightness" in the outer layer of its atmosphere. He theorised all extraterrestrial bodies could be inhabited by men, plants, and animals, including the Sun. Descartes wrote that there was no means to prove that the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation. The writings of these individuals demonstrate that interest in extraterrestrial life has existed throughout history, although only recently have humans had any means of investigating it. Since the mid-20th century, active research has taken place to look for signs of extraterrestrial life, encompassing searches for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from the analysis of telescope and specimen data to radios used to detect and transmit communications. The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, especially extraterrestrials in fiction. Science fiction has communicated scientific ideas, imagined a wide range of possibilities, and influenced public interest in and perspectives on extraterrestrial life. One point of overlap is the debate over the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to try to contact intelligent extraterrestrial life. Others – citing the tendency of technologically advanced human societies to enslave or destroy less advanced societies – argue it may be dangerous to actively draw attention to Earth.
2001-10-02T04:43:02Z
2023-12-31T16:38:59Z
[ "Template:Extrasolar planet counts", "Template:Convert", "Template:Cite encyclopedia", "Template:Refend", "Template:Citation needed", "Template:Better source needed", "Template:Val", "Template:Cite book", "Template:Wikiquote", "Template:Refbegin", "Template:About", "Template:Main", "Template:Reflist", "Template:Cbignore", "Template:Cite conference", "Template:Cite magazine", "Template:Cn", "Template:Cite press release", "Template:Wikisource portal", "Template:Astrobiology", "Template:Multiple image", "Template:Lang", "Template:Asof", "Template:Interstellar messages", "Template:Short description", "Template:As of", "Template:Cite news", "Template:Portal bar", "Template:Columns-list", "Template:Cite journal", "Template:Extraterrestrial life", "Template:Authority control", "Template:Use British English", "Template:Life in the Universe", "Template:Also", "Template:Commons category", "Template:Molecules detected in outer space", "Template:Use dmy dates", "Template:See also", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Extraterrestrial_life
9,589
European Strategic Programme on Research in Information Technology
European Strategic Programme on Research in Information Technology (ESPRIT) was a series of integrated programmes of information technology research and development projects and industrial technology transfer measures. It was a European Union initiative managed by the Directorate General for Industry (DG III) of the European Commission. Five ESPRIT programmes (ESPRIT 0 to ESPRIT 4) ran consecutively from 1983 to 1998. ESPRIT 4 was succeeded by the Information Society Technologies (IST) programme in 1999. Some of the projects and products supported by ESPRIT were:
[ { "paragraph_id": 0, "text": "European Strategic Programme on Research in Information Technology (ESPRIT) was a series of integrated programmes of information technology research and development projects and industrial technology transfer measures. It was a European Union initiative managed by the Directorate General for Industry (DG III) of the European Commission.", "title": "" }, { "paragraph_id": 1, "text": "Five ESPRIT programmes (ESPRIT 0 to ESPRIT 4) ran consecutively from 1983 to 1998. ESPRIT 4 was succeeded by the Information Society Technologies (IST) programme in 1999.", "title": "Programmes" }, { "paragraph_id": 2, "text": "Some of the projects and products supported by ESPRIT were:", "title": "Projects" } ]
European Strategic Programme on Research in Information Technology (ESPRIT) was a series of integrated programmes of information technology research and development projects and industrial technology transfer measures. It was a European Union initiative managed by the Directorate General for Industry of the European Commission.
2023-04-21T15:29:59Z
[ "Template:Short description", "Template:More citations needed", "Template:Expand list", "Template:Reflist", "Template:Cite web", "Template:Cite book" ]
https://en.wikipedia.org/wiki/European_Strategic_Programme_on_Research_in_Information_Technology
9,591
E. E. Cummings
Edward Estlin Cummings, who was also known as E. E. Cummings, e. e. Cummings, and e e Cummings (October 14, 1894 – September 3, 1962), was an American poet, painter, essayist, author, and playwright. He was an ambulance driver during World War I and was in an internment camp, which provided the basis for his novel The Enormous Room (1922). The following year he published his first collection of poetry, Tulips and Chimneys, which showed his early experiments with grammar and typography. He wrote four plays; HIM (1927) and Santa Claus: A Morality (1946) were most successful. He wrote EIMI (1933), a travelogue of the Soviet Union, and delivered the Charles Eliot Norton Lectures in poetry, published as i—six nonlectures (1953). Fairy Tales (1965), a collection of short stories, was published posthumously. Cummings wrote approximately 2,900 poems. He is often regarded as one of the most important American poets of the 20th century. He is associated with modernist free-form poetry, and much of his work uses idiosyncratic syntax and lower-case spellings for poetic expression. M. L. Rosenthal wrote that “The chief effect of Cummings’ jugglery with syntax, grammar, and diction was to blow open otherwise trite and bathetic motifs through a dynamic rediscovery of the energies sealed up in conventional usage.... He succeeded masterfully in splitting the atom of the cute commonplace.” For Norman Friedman, Cummings's inventions "are best understood as various ways of stripping the film of familiarity from language in order to strip the film of familiarity from the world. Transform the word, he seems to have felt, and you are on the way to transforming the world.” The poet Randall Jarrell said of Cummings, “No one else has ever made avant-garde, experimental poems so attractive to the general and the special reader.” James Dickey wrote, "I think that Cummings is a daringly original poet, with more vitality and more sheer, uncompromising talent than any other living American writer.” He acknowledged that while his poetry isn't perfect, he was “ashamed and even a little guilty in picking out flaws” in it, which he compared to noting “the aesthetic defects in a rose. It is better to say what must finally be said about Cummings: that he has helped to give life to the language.” Edward Estlin Cummings was born on October 14, 1894, in Cambridge, Massachusetts, to Edward Cummings and Rebecca Haswell (née Clarke), a well-known Unitarian couple in the city. His father was a professor at Harvard University who later became nationally known as the minister of South Congregational Church (Unitarian) in Boston, Massachusetts. His mother, who loved to spend time with her children, played games with Edward and his sister, Elizabeth. From an early age, Cummings' parents supported his creative gifts. Cummings wrote poems and drew as a child, and he often played outdoors with the many other children who lived in his neighborhood. He grew up in the company of such family friends as the philosophers William James and Josiah Royce. Many of Cummings' summers were spent on Silver Lake in Madison, New Hampshire, where his father had built two houses along the eastern shore. The family ultimately purchased the nearby Joy Farm where Cummings had his primary summer residence. He expressed transcendental leanings his entire life. As he matured, Cummings moved to an "I, Thou" relationship with God. 
His journals are replete with references to "le bon Dieu," as well as prayers for inspiration in his poetry and artwork (such as "Bon Dieu! may i some day do something truly great. amen."). Cummings "also prayed for strength to be his essential self ('may I be I is the only prayer—not may I be great or good or beautiful or wise or strong'), and for relief of spirit in times of depression ('almighty God! I thank thee for my soul; & may I never die spiritually into a mere mind through disease of loneliness')". Cummings wanted to be a poet from childhood and wrote poetry daily from age 8 to 22, exploring assorted forms. He studied Latin and Greek at Cambridge Latin High School. He attended Harvard University, graduating with a Bachelor of Arts degree magna cum laude and was elected to the Phi Beta Kappa society in 1915. The following year, he received a Master of Arts degree from the university. During his studies at Harvard, he developed an interest in modern poetry, which ignored conventional grammar and syntax and aimed for a dynamic use of language. His first published poems appeared in Eight Harvard Poets (1917). Upon graduating, he worked for a book dealer. In 1917, with the First World War going on in Europe, Cummings enlisted in the Norton-Harjes Ambulance Corps. On the boat to France, he met William Slater Brown and they quickly became friends. Due to an administrative error, Cummings and Brown did not receive an assignment for five weeks, a period they spent exploring Paris. Cummings fell in love with the city, to which he would return throughout his life. During their service in the ambulance corps, the two young writers sent letters home that drew the attention of the military censors. They were known to prefer the company of French soldiers over fellow ambulance drivers. The two openly expressed anti-war views; Cummings spoke of his lack of hatred for the Germans. On September 21, 1917, five months after starting his belated assignment, Cummings and William Slater Brown were arrested by the French military on suspicion of espionage and undesirable activities, they were held for three and a half months in a military detention camp at the Dépôt de Triage, in La Ferté-Macé, Orne, Normandy. They were imprisoned with other detainees in a large room. Cummings' father made strenuous efforts to obtain his son's release through diplomatic channels; although advised his son's release was approved, there were lengthy delays, with little explanation. In frustration, Cummings' father wrote a letter to President Woodrow Wilson in December 1917. Cummings was released on December 19, 1917, returning to his family in the U.S. by New Year's Day, 1918. Cummings, his father, and Brown's family continued to agitate for Brown's release. By mid-February, he, too, was America-bound. Cummings used his prison experience as the basis for his novel, The Enormous Room (1922), about which F. Scott Fitzgerald said, "Of all the work by young men who have sprung up since 1920 one book survives—The Enormous Room by E. E. Cummings ... Those few who cause books to live have not been able to endure the thought of its mortality." Later in 1918 he was drafted into the army. He served a training deployment in the 12th Division at Camp Devens, Massachusetts, until November 1918. 
Buffalo Bill's defunct who used to ride a watersmooth-silver stallion and break onetwothreefourfive pigeonsjustlikethat Jesus he was a handsome man and what i want to know is how do you like your blueeyed boy Mister Death "Buffalo Bill's" (1920) Cummings returned to Paris in 1921, and lived there for two years before returning to New York. His collection Tulips and Chimneys, was published in 1923, and his inventive use of grammar and syntax is evident. The book was heavily cut by his editor. XLI Poems was published in 1925. With these collections, Cummings made his reputation as an avant garde poet. During the rest of the 1920s and 1930s, Cummings returned to Paris a number of times, and traveled throughout Europe. In 1931 Cummings traveled to the Soviet Union, recounting his experiences in Eimi, published two years later. During these years Cummings also traveled to Northern Africa and Mexico, and he worked as an essayist and portrait artist for Vanity Fair magazine (1924–1927). In 1926, Cummings' parents were in a car crash; only his mother survived, although she was severely injured. Cummings later described the crash in the following passage from his i: six nonlectures series given at Harvard (as part of the Charles Eliot Norton Lectures) in 1952 and 1953: A locomotive cut the car in half, killing my father instantly. When two brakemen jumped from the halted train, they saw a woman standing – dazed but erect – beside a mangled machine; with blood spouting (as the older said to me) out of her head. One of her hands (the younger added) kept feeling her dress, as if trying to discover why it was wet. These men took my sixty-six-year old mother by the arms and tried to lead her toward a nearby farmhouse; but she threw them off, strode straight to my father's body, and directed a group of scared spectators to cover him. When this had been done (and only then) she let them lead her away. His father's death had a profound effect on Cummings, who entered a new period in his artistic life. He began to focus on more important aspects of life in his poetry. He started this new period by paying homage to his father in the poem "my father moved through dooms of love". In the 1930s, Samuel Aiwaz Jacobs was Cummings' publisher; he had started the Golden Eagle Press after working as a typographer and publisher. In 1952, his alma mater, Harvard University, awarded Cummings an honorary seat as a guest professor. The Charles Eliot Norton Lectures he gave in 1952 and 1955 were later collected as i: six nonlectures. i thank You God for most this amazing day: for the leaping greenly spirits of trees and a blue true dream of sky; and for everything which is natural which is infinite which is yes Cummings spent the last decade of his life traveling, fulfilling speaking engagements, and spending time at his summer home, Joy Farm, in Silver Lake, New Hampshire. He died of a stroke on September 3, 1962, at the age of 67 at Memorial Hospital in North Conway, New Hampshire. Cummings was buried at Forest Hills Cemetery in Boston, Massachusetts. At the time of his death, Cummings was recognized as the "second most widely read poet in the United States, after Robert Frost". Cummings' papers are held at the Houghton Library at Harvard University and the Harry Ransom Center at the University of Texas at Austin. Cummings was married briefly twice, first to Elaine Orr Thayer, then to Anne Minnerly Barton. His longest relationship lasted more than three decades with Marion Morehouse. 
In 2020, it was revealed that in 1917, before his first marriage, Cummings had exchanged several passionate love letters with a Parisian prostitute, Marie Louise Lallemand. Despite his efforts, Cummings was unable to find Lallemand upon his return to Paris from the front. Cummings' first marriage, to Elaine Orr, began as a love affair in 1918 while she was still married to Scofield Thayer, one of Cummings' friends from Harvard. During this time, he wrote a good deal of his erotic poetry. The couple had a daughter while Orr was still married to Thayer; after Orr divorced Thayer, Cummings and Orr married on March 19, 1924. Thayer had been registered on the child's birth certificate as the father, but Cummings legally adopted her after his marriage to Orr. Although his relationship with Orr stretched back several years, the marriage was brief: the couple separated after two months of marriage and divorced less than nine months later. Cummings married his second wife, Anne Minnerly Barton, on May 1, 1929. They separated three years later in 1932. That same year, Minnerly obtained a Mexican divorce; it was not officially recognized in the United States until August 1934. Anne died in 1970 aged 72. In 1934, after his separation from his second wife, Cummings met Marion Morehouse, a fashion model and photographer. Although it is not clear whether the two were ever formally married, Morehouse lived with Cummings until his death in 1962. She died on May 18, 1969, while living at 4 Patchin Place, Greenwich Village, New York City, where Cummings had resided since September 1924. According to his testimony in EIMI, Cummings had little interest in politics until his trip to the Soviet Union in 1931. He subsequently shifted rightward on many political and social issues. Despite his radical and bohemian public image, he was a Republican and later an ardent supporter of Joseph McCarthy. As well as being influenced by notable modernists, including Gertrude Stein and Ezra Pound, Cummings was particularly drawn to early imagist experiments; later, his visits to Paris exposed him to Dada and Surrealism, which were reflected in his writing style. Cummings critic and biographer Norman Friedman remarks that in Cummings' later work the "shift from simile to symbol" created poetry that is "frequently more lucid, more moving, and more profound than his earlier". Despite Cummings' familiarity with avant-garde styles (likely affected by the calligrammes of French poet Apollinaire, according to a contemporary observation), much of his work is quite traditional. For example, many of his poems are sonnets, albeit described by Richard D. Cureton as "revisionary...with scrambled rhymes and rearranged, disproportioned structures; awkwardly unpredictable metrical variation; clashing, mawkish diction; complex, wandering syntax; etc." He occasionally drew from the blues form and used acrostics. Many of Cummings' poems are satirical and address social issues but have an equal or even stronger bias toward Romanticism: time and again his poems celebrate love, sex, and the season of rebirth. While his poetic forms and themes share an affinity with the Romantic tradition, critic Emily Essert asserts that Cummings' work is particularly modernist and frequently employs what linguist Irene Fairley calls "syntactic deviance". Some poems do not involve any typographical or punctuation innovations at all, but purely syntactic ones.
i carry your heart with me(i carry it in my heart)i am never without it(anywhere i go you go,my dear;and whatever is done by only me is your doing,my darling) i fear no fate(for you are my fate,my sweet)i want no world(for beautiful you are my world,my true) and it's you are whatever a moon has always meant and whatever a sun will always sing is you here is the deepest secret nobody knows (here is the root of the root and the bud of the bud and the sky of the sky of a tree called life;which grows higher than soul can hope or mind can hide) and this is the wonder that's keeping the stars apart i carry your heart(i carry it in my heart) From "i carry your heart with me(i carry it in" (1952) While some of his poetry is free verse (and not beheld to rhyme or meter), Cureton has remarked that many of his sonnets follow an intricate rhyme scheme, and often employ pararhyme. A number of Cummings' poems feature his typographically exuberant style, with words, parts of words, or punctuation symbols scattered across the page, wherein Essert asserts "feeling is first" and the work begs to "be re-read in order to be understood"; Cummings, also a painter, created his texts not just as literature, but as "visual objects" on the page, and used typography to "paint a picture". The seeds of Cummings' unconventional style appear well established even in his earliest work. At age six, he wrote to his father: FATHER DEAR. BE, YOUR FATHER-GOOD AND GOOD, HE IS GOOD NOW, IT IS NOT GOOD TO SEE IT RAIN, FATHER DEAR IS, IT, DEAR, NO FATHER DEAR, LOVE, YOU DEAR, ESTLIN. Following his autobiographical novel, The Enormous Room, Cummings' first published work was a collection of poems titled Tulips and Chimneys (1923). This early work already displayed Cummings' characteristically eccentric use of grammar and punctuation, although a fair amount of the poems are written in conventional language. anyone lived in a pretty how town (with up so floating many bells down) spring summer autumn winter he sang his didn't he danced his did Women and men (both little and small) cared for anyone not at all they sowed their isn't they reaped their same sun moon stars rain From "anyone lived in a pretty how town" (1940) Cummings' works often do not follow the conventional rules that generate typical English sentences, or what Fairley identifies as "ungrammar". In addition, a number of Cummings' poems feature, in part or in whole, intentional misspellings, and several incorporate phonetic spellings intended to represent particular dialects. Cummings also employs what Fairley describes as "morphological innovation", wherein he frequently creates what critic Ian Landles calls: "unusual compounds suggestive of 'a child's language'" like "'mud-luscious' and 'puddle-wonderful'". Literary critic R.P. Blackmur has commented that this use of language is "frequently unintelligible because [Cummings] disregards the historical accumulation of meaning in words in favor of merely private and personal associations". Fellow poet Edna St. Vincent Millay, in her equivocal letter recommending Cummings for the Guggenheim Fellowship he was awarded in 1934, expressed her frustration at his opaque symbolism. 
"[I]f he prints and offers for sale poetry which he is quite content should be, after hours of sweating concentration, inexplicable from any point of view to a person as intelligent as myself, then he does so with a motive which is frivolous from the point of view of art, and should not be helped or encouraged by any serious person or group of persons ... there is fine writing and powerful writing (as well as some of the most pompous nonsense I ever let slip to the floor with a wide yawn) ... What I propose, then, is this: that you give Mr. Cummings enough rope. He may hang himself; or he may lasso a unicorn." Cummings also wrote children's books and novels. A notable example of his versatility is an introduction he wrote for a collection of the comic strip Krazy Kat. Cummings included ethnic slurs in his writing, which proved controversial. In his 1950 collection Xaipe: Seventy-One Poems, Cummings published two poems containing words that caused outrage in some quarters. Friedman considered these two poems to be "condensed" and "cryptic" parables, "sparsely told", in which setting the use of such "inflammatory material" was likely to meet with reader misapprehension. Poet William Carlos Williams spoke out in his defense. Cummings biographer Catherine Reef notes of the controversy: Friends begged Cummings to reconsider publishing these poems, and the book's editor pleaded with him to withdraw them, but he insisted that they stay. All the fuss perplexed him. The poems were commenting on prejudice, he pointed out, and not condoning it. He intended to show how derogatory words cause people to see others in terms of stereotypes rather than as individuals. "America (which turns Hungarian into 'hunky' & Irishman into 'mick' and Norwegian into 'square-head') is to blame for 'kike,'" he said. During his lifetime, Cummings published four plays. HIM, a three-act play, was first produced in 1928 by the Provincetown Players in New York City. The production was directed by James Light. The play's main characters are "Him", a playwright, portrayed by William Johnstone, and "Me", his girlfriend, portrayed by Erin O'Brien-Moore. Cummings said of the unorthodox play: Relax and give the play a chance to strut its stuff—relax, stop wondering what it is all 'about'—like many strange and familiar things, Life included, this play isn't 'about,' it simply is. ... Don't try to enjoy it, let it try to enjoy you. DON'T TRY TO UNDERSTAND IT, LET IT TRY TO UNDERSTAND YOU." Anthropos, or the Future of Art is a short, one-act play that Cummings contributed to the anthology Whither, Whither or After Sex, What? A Symposium to End Symposium. The play consists of dialogue between Man, the main character, and three "infrahumans", or inferior beings. The word anthropos is the Greek word for "man", in the sense of "mankind". Tom, A Ballet is a ballet based on Uncle Tom's Cabin. The ballet is detailed in a "synopsis" as well as descriptions of four "episodes", which were published by Cummings in 1935. It remained unperformed until 2015. Santa Claus: A Morality was probably Cummings' most successful play. It is an allegorical Christmas fantasy presented in one act of five scenes. The play was inspired by his daughter Nancy, with whom he was reunited in 1946. It was first published in the Harvard College magazine, Wake. The play's main characters are Santa Claus, his family (Woman and Child), Death, and Mob. At the outset of the play, Santa Claus's family has disintegrated due to their lust for knowledge (Science). 
After a series of events, however, Santa Claus's faith in love and his rejection of the materialism and disappointment he associates with Science are reaffirmed, and he is reunited with Woman and Child. Cummings was an avid painter, referring to writing and painting as his twin obsessions and to himself as a poetandpainter. He painted continuously, relentlessly, from childhood until his death, and left in his estate more than 1600 oils and watercolors (a figure that does not include the works he sold during his career) and over 9,000 drawings. In a self-interview from Foreword to an Exhibit: II (1945), the artist asked himself, Tell me, doesn’t your painting interfere with your writing? and answered, Quite the contrary: they love each other dearly. Cummings had more than 30 exhibits of his paintings in his lifetime. He received substantial acclaim as an American cubist and an abstract, avant garde painter between the World Wars, but with the publication of his books The Enormous Room and Tulips and Chimneys in the 1920s, his reputation as a poet eclipsed his success as a visual artist. In 1931, he published a limited edition volume of his artwork entitled CIOPW, named for his media of charcoal, ink, oil, pencil, and watercolor. About this same time, he began to break from Modernist aesthetics and employ a more subjective and spontaneous style; his work became more representational: landscapes, nudes, still lifes, and portraits. Cummings' publishers and others have often echoed the unconventional orthography in his poetry by writing his name in lower case. Cummings himself used both the lowercase and capitalized versions, though he most often signed his name with capitals. The use of lower case for his initials was popularized in part by the title of some books, particularly in the 1960s, printing his name in lower case on the cover and spine. In the preface to E. E. Cummings: The Growth of a Writer by Norman Friedman, critic Harry T. Moore notes Cummings "had his name put legally into lower case, and in his later books the titles and his name were always in lower case". According to Cummings' widow, however, this is incorrect. She wrote to Friedman: "You should not have allowed H. Moore to make such a stupid & childish statement about Cummings & his signature." On February 27, 1951, Cummings wrote to his French translator D. Jon Grossman that he preferred the use of upper case for the particular edition they were working on. One Cummings scholar believes that on the rare occasions that Cummings signed his name in all lower case, he may have intended it as a gesture of humility, not as an indication that it was the preferred orthography for others to use. Additionally, The Chicago Manual of Style, which prescribes favoring non-standard capitalization of names in accordance with the bearer's strongly stated preference, notes "E. E. Cummings can be safely capitalized; it was one of his publishers, not he himself, who lowercased his name." In 1943, modern dancer and choreographer, Jean Erdman presented "The Transformations of Medusa, Forever and Sunsmell" with a commissioned score by John Cage and a spoken text from the title poem by E. E. Cummings, sponsored by the Arts Club of Chicago. Erdman also choreographed "Twenty Poems" (1960), a cycle of E. E. Cummings' poems for eight dancers and one actor, with a commissioned score by Teiji Ito. It was performed in the round at the Circle in the Square Theatre in Greenwich Village. 
Numerous composers have set Cummings' poems to music. During his lifetime, Cummings received numerous awards in recognition of his work.
[ { "paragraph_id": 0, "text": "Edward Estlin Cummings, who was also known as E. E. Cummings, e. e. Cummings, and e e Cummings (October 14, 1894 – September 3, 1962), was an American poet, painter, essayist, author, and playwright. He was an ambulance driver during World War I and was in an internment camp, which provided the basis for his novel The Enormous Room (1922). The following year he published his first collection of poetry, Tulips and Chimneys, which showed his early experiments with grammar and typography. He wrote four plays; HIM (1927) and Santa Claus: A Morality (1946) were most successful. He wrote EIMI (1933), a travelogue of the Soviet Union, and delivered the Charles Eliot Norton Lectures in poetry, published as i—six nonlectures (1953). Fairy Tales (1965), a collection of short stories, was published posthumously.", "title": "" }, { "paragraph_id": 1, "text": "Cummings wrote approximately 2,900 poems. He is often regarded as one of the most important American poets of the 20th century. He is associated with modernist free-form poetry, and much of his work uses idiosyncratic syntax and lower-case spellings for poetic expression. M. L. Rosenthal wrote that “The chief effect of Cummings’ jugglery with syntax, grammar, and diction was to blow open otherwise trite and bathetic motifs through a dynamic rediscovery of the energies sealed up in conventional usage.... He succeeded masterfully in splitting the atom of the cute commonplace.” For Norman Friedman, Cummings's inventions \"are best understood as various ways of stripping the film of familiarity from language in order to strip the film of familiarity from the world. Transform the word, he seems to have felt, and you are on the way to transforming the world.”", "title": "" }, { "paragraph_id": 2, "text": "The poet Randall Jarrell said of Cummings, “No one else has ever made avant-garde, experimental poems so attractive to the general and the special reader.” James Dickey wrote, \"I think that Cummings is a daringly original poet, with more vitality and more sheer, uncompromising talent than any other living American writer.” He acknowledged that while his poetry isn't perfect, he was “ashamed and even a little guilty in picking out flaws” in it, which he compared to noting “the aesthetic defects in a rose. It is better to say what must finally be said about Cummings: that he has helped to give life to the language.”", "title": "" }, { "paragraph_id": 3, "text": "Edward Estlin Cummings was born on October 14, 1894, in Cambridge, Massachusetts, to Edward Cummings and Rebecca Haswell (née Clarke), a well-known Unitarian couple in the city. His father was a professor at Harvard University who later became nationally known as the minister of South Congregational Church (Unitarian) in Boston, Massachusetts. His mother, who loved to spend time with her children, played games with Edward and his sister, Elizabeth. From an early age, Cummings' parents supported his creative gifts. Cummings wrote poems and drew as a child, and he often played outdoors with the many other children who lived in his neighborhood. He grew up in the company of such family friends as the philosophers William James and Josiah Royce. Many of Cummings' summers were spent on Silver Lake in Madison, New Hampshire, where his father had built two houses along the eastern shore. 
The family ultimately purchased the nearby Joy Farm where Cummings had his primary summer residence.", "title": "Life" }, { "paragraph_id": 4, "text": "He expressed transcendental leanings his entire life. As he matured, Cummings moved to an \"I, Thou\" relationship with God. His journals are replete with references to \"le bon Dieu,\" as well as prayers for inspiration in his poetry and artwork (such as \"Bon Dieu! may i some day do something truly great. amen.\"). Cummings \"also prayed for strength to be his essential self ('may I be I is the only prayer—not may I be great or good or beautiful or wise or strong'), and for relief of spirit in times of depression ('almighty God! I thank thee for my soul; & may I never die spiritually into a mere mind through disease of loneliness')\".", "title": "Life" }, { "paragraph_id": 5, "text": "Cummings wanted to be a poet from childhood and wrote poetry daily from age 8 to 22, exploring assorted forms. He studied Latin and Greek at Cambridge Latin High School. He attended Harvard University, graduating with a Bachelor of Arts degree magna cum laude and was elected to the Phi Beta Kappa society in 1915. The following year, he received a Master of Arts degree from the university. During his studies at Harvard, he developed an interest in modern poetry, which ignored conventional grammar and syntax and aimed for a dynamic use of language. His first published poems appeared in Eight Harvard Poets (1917). Upon graduating, he worked for a book dealer.", "title": "Life" }, { "paragraph_id": 6, "text": "In 1917, with the First World War going on in Europe, Cummings enlisted in the Norton-Harjes Ambulance Corps. On the boat to France, he met William Slater Brown and they quickly became friends. Due to an administrative error, Cummings and Brown did not receive an assignment for five weeks, a period they spent exploring Paris. Cummings fell in love with the city, to which he would return throughout his life.", "title": "Life" }, { "paragraph_id": 7, "text": "During their service in the ambulance corps, the two young writers sent letters home that drew the attention of the military censors. They were known to prefer the company of French soldiers over fellow ambulance drivers. The two openly expressed anti-war views; Cummings spoke of his lack of hatred for the Germans. On September 21, 1917, five months after starting his belated assignment, Cummings and William Slater Brown were arrested by the French military on suspicion of espionage and undesirable activities, they were held for three and a half months in a military detention camp at the Dépôt de Triage, in La Ferté-Macé, Orne, Normandy.", "title": "Life" }, { "paragraph_id": 8, "text": "They were imprisoned with other detainees in a large room. Cummings' father made strenuous efforts to obtain his son's release through diplomatic channels; although advised his son's release was approved, there were lengthy delays, with little explanation. In frustration, Cummings' father wrote a letter to President Woodrow Wilson in December 1917. Cummings was released on December 19, 1917, returning to his family in the U.S. by New Year's Day, 1918. Cummings, his father, and Brown's family continued to agitate for Brown's release. By mid-February, he, too, was America-bound. Cummings used his prison experience as the basis for his novel, The Enormous Room (1922), about which F. Scott Fitzgerald said, \"Of all the work by young men who have sprung up since 1920 one book survives—The Enormous Room by E. E. Cummings ... 
Those few who cause books to live have not been able to endure the thought of its mortality.\" Later in 1918 he was drafted into the army. He served a training deployment in the 12th Division at Camp Devens, Massachusetts, until November 1918.", "title": "Life" }, { "paragraph_id": 9, "text": "Buffalo Bill's defunct who used to ride a watersmooth-silver stallion and break onetwothreefourfive pigeonsjustlikethat Jesus he was a handsome man and what i want to know is how do you like your blueeyed boy Mister Death", "title": "Life" }, { "paragraph_id": 10, "text": "\"Buffalo Bill's\" (1920)", "title": "Life" }, { "paragraph_id": 11, "text": "Cummings returned to Paris in 1921, and lived there for two years before returning to New York. His collection Tulips and Chimneys, was published in 1923, and his inventive use of grammar and syntax is evident. The book was heavily cut by his editor. XLI Poems was published in 1925. With these collections, Cummings made his reputation as an avant garde poet.", "title": "Life" }, { "paragraph_id": 12, "text": "During the rest of the 1920s and 1930s, Cummings returned to Paris a number of times, and traveled throughout Europe. In 1931 Cummings traveled to the Soviet Union, recounting his experiences in Eimi, published two years later. During these years Cummings also traveled to Northern Africa and Mexico, and he worked as an essayist and portrait artist for Vanity Fair magazine (1924–1927).", "title": "Life" }, { "paragraph_id": 13, "text": "In 1926, Cummings' parents were in a car crash; only his mother survived, although she was severely injured. Cummings later described the crash in the following passage from his i: six nonlectures series given at Harvard (as part of the Charles Eliot Norton Lectures) in 1952 and 1953:", "title": "Life" }, { "paragraph_id": 14, "text": "A locomotive cut the car in half, killing my father instantly. When two brakemen jumped from the halted train, they saw a woman standing – dazed but erect – beside a mangled machine; with blood spouting (as the older said to me) out of her head. One of her hands (the younger added) kept feeling her dress, as if trying to discover why it was wet. These men took my sixty-six-year old mother by the arms and tried to lead her toward a nearby farmhouse; but she threw them off, strode straight to my father's body, and directed a group of scared spectators to cover him. When this had been done (and only then) she let them lead her away.", "title": "Life" }, { "paragraph_id": 15, "text": "His father's death had a profound effect on Cummings, who entered a new period in his artistic life. He began to focus on more important aspects of life in his poetry. He started this new period by paying homage to his father in the poem \"my father moved through dooms of love\".", "title": "Life" }, { "paragraph_id": 16, "text": "In the 1930s, Samuel Aiwaz Jacobs was Cummings' publisher; he had started the Golden Eagle Press after working as a typographer and publisher.", "title": "Life" }, { "paragraph_id": 17, "text": "In 1952, his alma mater, Harvard University, awarded Cummings an honorary seat as a guest professor. 
The Charles Eliot Norton Lectures he gave in 1952 and 1955 were later collected as i: six nonlectures.", "title": "Life" }, { "paragraph_id": 18, "text": "i thank You God for most this amazing day: for the leaping greenly spirits of trees and a blue true dream of sky; and for everything which is natural which is infinite which is yes", "title": "Life" }, { "paragraph_id": 19, "text": "Cummings spent the last decade of his life traveling, fulfilling speaking engagements, and spending time at his summer home, Joy Farm, in Silver Lake, New Hampshire. He died of a stroke on September 3, 1962, at the age of 67 at Memorial Hospital in North Conway, New Hampshire. Cummings was buried at Forest Hills Cemetery in Boston, Massachusetts. At the time of his death, Cummings was recognized as the \"second most widely read poet in the United States, after Robert Frost\".", "title": "Life" }, { "paragraph_id": 20, "text": "Cummings' papers are held at the Houghton Library at Harvard University and the Harry Ransom Center at the University of Texas at Austin.", "title": "Life" }, { "paragraph_id": 21, "text": "Cummings was married briefly twice, first to Elaine Orr Thayer, then to Anne Minnerly Barton. His longest relationship lasted more than three decades with Marion Morehouse.", "title": "Personal life" }, { "paragraph_id": 22, "text": "In 2020, it was revealed that in 1917, before his first marriage, Cummings had shared several passionate love letters with a Parisian prostitute, Marie Louise Lallemand. Despite Cummings' efforts, he was unable to find Lallemand upon his return to Paris after the front.", "title": "Personal life" }, { "paragraph_id": 23, "text": "Cummings' first marriage, to Elaine Orr, began as a love affair in 1918 while she was still married to Scofield Thayer, one of Cummings' friends from Harvard. During this time, he wrote a good deal of his erotic poetry. The couple had a daughter while Orr was still married to Thayer; after Orr divorced Thayer, Cummings and Orr married on March 19, 1924. Thayer had been registered on the child's birth certificate as the father, but Cummings legally adopted her after his marriage to Orr. Although his relationship with Orr stretched back several years, the marriage was brief: the couple separated after two months of marriage and divorced less than nine months later.", "title": "Personal life" }, { "paragraph_id": 24, "text": "Cummings married his second wife Anne Minnerly Barton on May 1, 1929. They separated three years later in 1932. That same year, Minnerly obtained a Mexican divorce; it was not officially recognized in the United States until August 1934. Anne died in 1970 aged 72.", "title": "Personal life" }, { "paragraph_id": 25, "text": "In 1934, after his separation from his second wife, Cummings met Marion Morehouse, a fashion model and photographer. Although it is not clear whether the two were ever formally married, Morehouse lived with Cummings until his death in 1962. She died on May 18, 1969, while living at 4 Patchin Place, Greenwich Village, New York City, where Cummings had resided since September 1924.", "title": "Personal life" }, { "paragraph_id": 26, "text": "According to his testimony in EIMI, Cummings had little interest in politics until his trip to the Soviet Union in 1931. He subsequently shifted rightward on many political and social issues. 
Despite his radical and bohemian public image, he was a Republican and later an ardent supporter of Joseph McCarthy.", "title": "Personal life" }, { "paragraph_id": 27, "text": "As well as being influenced by notable modernists, including Gertrude Stein and Ezra Pound, Cummings was particularly drawn to early imagist experiments; later, his visits to Paris exposed him to Dada and Surrealism, which was reflected in his writing style. Cummings critic and biographer Norman Friedman remarks that in Cummings' later work the \"shift from simile to symbol\" created poetry that is \"frequently more lucid, more moving, and more profound than his earlier\".", "title": "Works and style" }, { "paragraph_id": 28, "text": "Despite Cummings' familiarity with avant-garde styles (likely affected by the calligrammes of French poet Apollinaire, according to a contemporary observation), much of his work is quite traditional. For example, many of his poems are sonnets, albeit described by Richard D. Cureton as \"revisionary...with scrambled rhymes and rearranged, disproportioned structures; awkwardly unpredictable metrical variation; clashing, mawkish diction; complex, wandering syntax; etc.\" He occasionally drew from the blues form and used acrostics. Many of Cummings' poems are satirical and address social issues but have an equal or even stronger bias toward Romanticism: time and again his poems celebrate love, sex, and the season of rebirth.", "title": "Works and style" }, { "paragraph_id": 29, "text": "While his poetic forms and themes share an affinity with the Romantic tradition, critic Emily Essert asserts that Cummings' work is particularly modernist and frequently employs what linguist Irene Fairley calls \"syntatic deviance\". Some poems do not involve any typographical or punctuation innovations at all, but purely syntactic ones.", "title": "Works and style" }, { "paragraph_id": 30, "text": "i carry your heart with me(i carry it in my heart)i am never without it(anywhere i go you go,my dear;and whatever is done by only me is your doing,my darling) i fear no fate(for you are my fate,my sweet)i want no world(for beautiful you are my world,my true) and it's you are whatever a moon has always meant and whatever a sun will always sing is you here is the deepest secret nobody knows (here is the root of the root and the bud of the bud and the sky of the sky of a tree called life;which grows higher than soul can hope or mind can hide) and this is the wonder that's keeping the stars apart i carry your heart(i carry it in my heart)", "title": "Works and style" }, { "paragraph_id": 31, "text": "From \"i carry your heart with me(i carry it in\" (1952)", "title": "Works and style" }, { "paragraph_id": 32, "text": "While some of his poetry is free verse (and not beheld to rhyme or meter), Cureton has remarked that many of his sonnets follow an intricate rhyme scheme, and often employ pararhyme. A number of Cummings' poems feature his typographically exuberant style, with words, parts of words, or punctuation symbols scattered across the page, wherein Essert asserts \"feeling is first\" and the work begs to \"be re-read in order to be understood\"; Cummings, also a painter, created his texts not just as literature, but as \"visual objects\" on the page, and used typography to \"paint a picture\".", "title": "Works and style" }, { "paragraph_id": 33, "text": "The seeds of Cummings' unconventional style appear well established even in his earliest work. 
At age six, he wrote to his father:", "title": "Works and style" }, { "paragraph_id": 34, "text": "FATHER DEAR. BE, YOUR FATHER-GOOD AND GOOD, HE IS GOOD NOW, IT IS NOT GOOD TO SEE IT RAIN, FATHER DEAR IS, IT, DEAR, NO FATHER DEAR, LOVE, YOU DEAR, ESTLIN.", "title": "Works and style" }, { "paragraph_id": 35, "text": "Following his autobiographical novel, The Enormous Room, Cummings' first published work was a collection of poems titled Tulips and Chimneys (1923). This early work already displayed Cummings' characteristically eccentric use of grammar and punctuation, although a fair amount of the poems are written in conventional language.", "title": "Works and style" }, { "paragraph_id": 36, "text": "anyone lived in a pretty how town (with up so floating many bells down) spring summer autumn winter he sang his didn't he danced his did Women and men (both little and small) cared for anyone not at all they sowed their isn't they reaped their same sun moon stars rain", "title": "Works and style" }, { "paragraph_id": 37, "text": "From \"anyone lived in a pretty how town\" (1940)", "title": "Works and style" }, { "paragraph_id": 38, "text": "Cummings' works often do not follow the conventional rules that generate typical English sentences, or what Fairley identifies as \"ungrammar\". In addition, a number of Cummings' poems feature, in part or in whole, intentional misspellings, and several incorporate phonetic spellings intended to represent particular dialects. Cummings also employs what Fairley describes as \"morphological innovation\", wherein he frequently creates what critic Ian Landles calls: \"unusual compounds suggestive of 'a child's language'\" like \"'mud-luscious' and 'puddle-wonderful'\". Literary critic R.P. Blackmur has commented that this use of language is \"frequently unintelligible because [Cummings] disregards the historical accumulation of meaning in words in favor of merely private and personal associations\".", "title": "Works and style" }, { "paragraph_id": 39, "text": "Fellow poet Edna St. Vincent Millay, in her equivocal letter recommending Cummings for the Guggenheim Fellowship he was awarded in 1934, expressed her frustration at his opaque symbolism. \"[I]f he prints and offers for sale poetry which he is quite content should be, after hours of sweating concentration, inexplicable from any point of view to a person as intelligent as myself, then he does so with a motive which is frivolous from the point of view of art, and should not be helped or encouraged by any serious person or group of persons ... there is fine writing and powerful writing (as well as some of the most pompous nonsense I ever let slip to the floor with a wide yawn) ... What I propose, then, is this: that you give Mr. Cummings enough rope. He may hang himself; or he may lasso a unicorn.\"", "title": "Works and style" }, { "paragraph_id": 40, "text": "Cummings also wrote children's books and novels. A notable example of his versatility is an introduction he wrote for a collection of the comic strip Krazy Kat.", "title": "Works and style" }, { "paragraph_id": 41, "text": "Cummings included ethnic slurs in his writing, which proved controversial. In his 1950 collection Xaipe: Seventy-One Poems, Cummings published two poems containing words that caused outrage in some quarters. Friedman considered these two poems to be \"condensed\" and \"cryptic\" parables, \"sparsely told\", in which setting the use of such \"inflammatory material\" was likely to meet with reader misapprehension. 
Poet William Carlos Williams spoke out in his defense.", "title": "Works and style" }, { "paragraph_id": 42, "text": "Cummings biographer Catherine Reef notes of the controversy:", "title": "Works and style" }, { "paragraph_id": 43, "text": "Friends begged Cummings to reconsider publishing these poems, and the book's editor pleaded with him to withdraw them, but he insisted that they stay. All the fuss perplexed him. The poems were commenting on prejudice, he pointed out, and not condoning it. He intended to show how derogatory words cause people to see others in terms of stereotypes rather than as individuals. \"America (which turns Hungarian into 'hunky' & Irishman into 'mick' and Norwegian into 'square-head') is to blame for 'kike,'\" he said.", "title": "Works and style" }, { "paragraph_id": 44, "text": "During his lifetime, Cummings published four plays. HIM, a three-act play, was first produced in 1928 by the Provincetown Players in New York City. The production was directed by James Light. The play's main characters are \"Him\", a playwright, portrayed by William Johnstone, and \"Me\", his girlfriend, portrayed by Erin O'Brien-Moore.", "title": "Works and style" }, { "paragraph_id": 45, "text": "Cummings said of the unorthodox play:", "title": "Works and style" }, { "paragraph_id": 46, "text": "Relax and give the play a chance to strut its stuff—relax, stop wondering what it is all 'about'—like many strange and familiar things, Life included, this play isn't 'about,' it simply is. ... Don't try to enjoy it, let it try to enjoy you. DON'T TRY TO UNDERSTAND IT, LET IT TRY TO UNDERSTAND YOU.\"", "title": "Works and style" }, { "paragraph_id": 47, "text": "Anthropos, or the Future of Art is a short, one-act play that Cummings contributed to the anthology Whither, Whither or After Sex, What? A Symposium to End Symposium. The play consists of dialogue between Man, the main character, and three \"infrahumans\", or inferior beings. The word anthropos is the Greek word for \"man\", in the sense of \"mankind\".", "title": "Works and style" }, { "paragraph_id": 48, "text": "Tom, A Ballet is a ballet based on Uncle Tom's Cabin. The ballet is detailed in a \"synopsis\" as well as descriptions of four \"episodes\", which were published by Cummings in 1935. It remained unperformed until 2015.", "title": "Works and style" }, { "paragraph_id": 49, "text": "Santa Claus: A Morality was probably Cummings' most successful play. It is an allegorical Christmas fantasy presented in one act of five scenes. The play was inspired by his daughter Nancy, with whom he was reunited in 1946. It was first published in the Harvard College magazine, Wake. The play's main characters are Santa Claus, his family (Woman and Child), Death, and Mob. At the outset of the play, Santa Claus's family has disintegrated due to their lust for knowledge (Science). After a series of events, however, Santa Claus's faith in love and his rejection of the materialism and disappointment he associates with Science are reaffirmed, and he is reunited with Woman and Child.", "title": "Works and style" }, { "paragraph_id": 50, "text": "Cummings was an avid painter, referring to writing and painting as his twin obsessions and to himself as a poetandpainter. He painted continuously, relentlessly, from childhood until his death, and left in his estate more than 1600 oils and watercolors (a figure that does not include the works he sold during his career) and over 9,000 drawings. 
In a self-interview from Foreword to an Exhibit: II (1945), the artist asked himself, Tell me, doesn’t your painting interfere with your writing? and answered, Quite the contrary: they love each other dearly.", "title": "Works and style" }, { "paragraph_id": 51, "text": "Cummings had more than 30 exhibits of his paintings in his lifetime. He received substantial acclaim as an American cubist and an abstract, avant garde painter between the World Wars, but with the publication of his books The Enormous Room and Tulips and Chimneys in the 1920s, his reputation as a poet eclipsed his success as a visual artist. In 1931, he published a limited edition volume of his artwork entitled CIOPW, named for his media of charcoal, ink, oil, pencil, and watercolor. About this same time, he began to break from Modernist aesthetics and employ a more subjective and spontaneous style; his work became more representational: landscapes, nudes, still lifes, and portraits.", "title": "Works and style" }, { "paragraph_id": 52, "text": "Cummings' publishers and others have often echoed the unconventional orthography in his poetry by writing his name in lower case. Cummings himself used both the lowercase and capitalized versions, though he most often signed his name with capitals.", "title": "Works and style" }, { "paragraph_id": 53, "text": "The use of lower case for his initials was popularized in part by the title of some books, particularly in the 1960s, printing his name in lower case on the cover and spine. In the preface to E. E. Cummings: The Growth of a Writer by Norman Friedman, critic Harry T. Moore notes Cummings \"had his name put legally into lower case, and in his later books the titles and his name were always in lower case\". According to Cummings' widow, however, this is incorrect. She wrote to Friedman: \"You should not have allowed H. Moore to make such a stupid & childish statement about Cummings & his signature.\" On February 27, 1951, Cummings wrote to his French translator D. Jon Grossman that he preferred the use of upper case for the particular edition they were working on. One Cummings scholar believes that on the rare occasions that Cummings signed his name in all lower case, he may have intended it as a gesture of humility, not as an indication that it was the preferred orthography for others to use. Additionally, The Chicago Manual of Style, which prescribes favoring non-standard capitalization of names in accordance with the bearer's strongly stated preference, notes \"E. E. Cummings can be safely capitalized; it was one of his publishers, not he himself, who lowercased his name.\"", "title": "Works and style" }, { "paragraph_id": 54, "text": "In 1943, modern dancer and choreographer, Jean Erdman presented \"The Transformations of Medusa, Forever and Sunsmell\" with a commissioned score by John Cage and a spoken text from the title poem by E. E. Cummings, sponsored by the Arts Club of Chicago. Erdman also choreographed \"Twenty Poems\" (1960), a cycle of E. E. Cummings' poems for eight dancers and one actor, with a commissioned score by Teiji Ito. 
It was performed in the round at the Circle in the Square Theatre in Greenwich Village.", "title": "Adaptations" }, { "paragraph_id": 55, "text": "Numerous composers have set Cummings' poems to music:", "title": "Adaptations" }, { "paragraph_id": 56, "text": "During his lifetime, Cummings received numerous awards in recognition of his work, including:", "title": "Awards" }, { "paragraph_id": 57, "text": "Full text of poetry available at:", "title": "References" } ]
Edward Estlin Cummings, who was also known as E. E. Cummings, e. e. Cummings, and e e Cummings, was an American poet, painter, essayist, author, and playwright. He was an ambulance driver during World War I and was in an internment camp, which provided the basis for his novel The Enormous Room (1922). The following year he published his first collection of poetry, Tulips and Chimneys, which showed his early experiments with grammar and typography. He wrote four plays; HIM (1927) and Santa Claus: A Morality (1946) were most successful. He wrote EIMI (1933), a travelogue of the Soviet Union, and delivered the Charles Eliot Norton Lectures in poetry, published as i—six nonlectures (1953). Fairy Tales (1965), a collection of short stories, was published posthumously. Cummings wrote approximately 2,900 poems. He is often regarded as one of the most important American poets of the 20th century. He is associated with modernist free-form poetry, and much of his work uses idiosyncratic syntax and lower-case spellings for poetic expression. M. L. Rosenthal wrote that “The chief effect of Cummings’ jugglery with syntax, grammar, and diction was to blow open otherwise trite and bathetic motifs through a dynamic rediscovery of the energies sealed up in conventional usage.... He succeeded masterfully in splitting the atom of the cute commonplace.” For Norman Friedman, Cummings's inventions "are best understood as various ways of stripping the film of familiarity from language in order to strip the film of familiarity from the world. Transform the word, he seems to have felt, and you are on the way to transforming the world.” The poet Randall Jarrell said of Cummings, “No one else has ever made avant-garde, experimental poems so attractive to the general and the special reader.” James Dickey wrote, "I think that Cummings is a daringly original poet, with more vitality and more sheer, uncompromising talent than any other living American writer.” He acknowledged that while his poetry isn't perfect, he was “ashamed and even a little guilty in picking out flaws” in it, which he compared to noting “the aesthetic defects in a rose. It is better to say what must finally be said about Cummings: that he has helped to give life to the language.”
East River
The East River is a saltwater tidal estuary in New York City. The waterway, which is actually not a river despite its name, connects Upper New York Bay on its south end to Long Island Sound on its north end. It separates Long Island, with the boroughs of Brooklyn and Queens, from Manhattan Island and from the Bronx (on the North American mainland). Because of its connection to Long Island Sound, it was once also known as the Sound River. The tidal strait changes its direction of flow regularly, and is subject to strong fluctuations in its current, which are accentuated by its narrowness and variety of depths. The waterway is navigable for its entire length of 16 miles (26 km), and was historically the center of maritime activities in the city.

Technically a drowned valley, like the other waterways around New York City, the strait was formed approximately 11,000 years ago at the end of the Wisconsin glaciation. The distinct change in the shape of the strait between the lower and upper portions is evidence of this glacial activity. The upper portion (from Long Island Sound to Hell Gate), running largely perpendicular to the glacial motion, is wide, meandering, and has deep narrow bays on both banks, scoured out by the glacier's movement. The lower portion (from Hell Gate to New York Bay) runs north–south, parallel to the glacial motion. It is much narrower, with straight banks. The bays that exist, as well as those that used to exist before being filled in by human activity, are largely wide and shallow.

The section known as "Hell Gate" – from the Dutch name Hellegat meaning either "bright strait" or "clear opening", given to the entire river in 1614 by explorer Adriaen Block when he passed through it in his ship Tyger – is a narrow, turbulent, and particularly treacherous stretch of the river. Tides from the Long Island Sound, New York Harbor and the Harlem River meet there, making it difficult to navigate, especially because of the number of rocky islets which once dotted it, with names such as "Frying Pan", "Pot, Bread and Cheese", "Hen and Chicken", "Heel Top", "Flood", and "Gridiron", roughly 12 islets and reefs in all, all of which led to a number of shipwrecks, including HMS Hussar, a British frigate that sank in 1780 while supposedly carrying gold and silver intended to pay British troops. The stretch has since been cleared of rocks and widened.

Washington Irving wrote of Hell Gate that the current sounded "like a bull bellowing for more drink" at half tide, while at full tide it slept "as soundly as an alderman after dinner". He said it was like "a peaceable fellow enough when he has no liquor at all, or when he has a skinful, but who, when half-seas over, plays the very devil." The tidal regime is complex, with the two major tides – from the Long Island Sound and from the Atlantic Ocean – separated by about two hours; and this is without consideration of the tidal influence of the Harlem River, all of which creates a "dangerous cataract", as one ship's captain put it.

The river is navigable for its entire length of 16 miles (26 km). In 1939 it was reported that the stretch from The Battery to the former Brooklyn Navy Yard near Wallabout Bay, a run of about 1,000 yards (910 m), was 40 feet (12 m) deep; the long section from there, running to the west of Roosevelt Island, through Hell Gate and to Throg's Neck, was at least 35 feet (11 m) deep; and then eastward from there the river was, at mean low tide, 168 feet (51 m) deep.
The broadness of the river's channel south of Roosevelt Island is caused by the dipping of the hard Fordham gneiss underlying the island under the less strong Inwood marble which lies under the river bed. Why the river turns to the east as it approaches the three lower Manhattan bridges is geologically unknown.

Roosevelt Island, a long (2-mile (3.2 km)) and narrow (800 feet (240 m)) landmass, lies in the stretch of the river between Manhattan Island and the borough of Queens, roughly paralleling Manhattan's East 46th–86th Streets. The abrupt termination of the island on its north end is due to an extension of the 125th Street Fault. Politically, the island's 147 acres (0.59 km²) constitute part of the borough of Manhattan. It is connected to Queens by the Roosevelt Island Bridge, to Manhattan by the Roosevelt Island Tramway, and to both boroughs by a subway station served by the F train. The Queensboro Bridge also runs across Roosevelt Island, and an elevator allowing both pedestrian and vehicular access to the island was added to the bridge in 1930, but elevator service was discontinued in 1955 following the opening of the Roosevelt Island Bridge, and the elevator was demolished in 1970. The island, which was formerly known as Blackwell's Island and Welfare Island before being renamed in honor of US President Franklin D. Roosevelt, historically served as the site of a penitentiary and a number of hospitals; today, it is dominated by residential neighborhoods consisting of large apartment buildings and parkland (much of which is dotted with the ruins of older structures).

The largest land mass in the River south of Roosevelt Island is U Thant Island, an artificial islet created during the construction of the Steinway Tunnel (which currently serves the subway's 7 and <7> lines). Officially named Belmont Island after one of the tunnel's financiers, the landmass owes its popular name (after Burmese diplomat U Thant, former Secretary-General of the United Nations) to the efforts of a group associated with the guru Sri Chinmoy that held meditation meetings on the island in the 1970s. Today, the island is owned by New York State and serves as a migratory bird sanctuary that is closed to visitors.

Proceeding north and east from Roosevelt Island, the River's principal islands include Manhattan's Mill Rock, an 8.6-acre (3.5 ha) island located about 1,000 feet from Manhattan's East 96th Street; Manhattan's 520-acre Randalls and Wards Islands, two formerly separate islands joined by landfill that are home to a large public park, a number of public institutions, and the supports for the Triborough and the Hell Gate Bridges; the Bronx's Rikers Island, once under 100 acres (0.40 km²) but now over 400 acres (1.6 km²) following extensive landfill expansion after the island's 1884 purchase by the city as a prison farm, and still home to New York City's massive and controversial primary jail complex; and North and South Brother Islands, both of which also constitute part of the Bronx.

The Bronx River, Pugsley Creek, and Westchester Creek drain into the northern bank of the East River in the northern section of the strait. The Flushing River, historically known as Flushing Creek, empties into the strait's southern bank near LaGuardia Airport via Flushing Bay. Further west, Luyster Creek drains into the East River in Astoria, Queens. North of Randalls Island, it is joined by the Bronx Kill.
Along the east of Wards Island, at approximately the strait's midpoint, it narrows into a channel called Hell Gate, which is spanned by both the Robert F. Kennedy Bridge (formerly the Triborough) and the Hell Gate Bridge. On the south side of Wards Island, it is joined by the Harlem River. Newtown Creek on Long Island, which itself contained several tributaries, drains into the East River and forms part of the boundary between Queens and Brooklyn. Bushwick Inlet and Wallabout Bay also drain into the strait on the Long Island side. The Gowanus Canal was built from Gowanus Creek, which emptied into the river. Historically, there were other small streams which emptied into the river, though these and their associated wetlands have been filled in and built over. These small streams included the Harlem Creek, one of the most significant tributaries originating in Manhattan. Other streams that emptied into the East River included the Sawkill in Manhattan, Mill Brook in the Bronx, and Sunswick Creek in Queens.

Prior to the arrival of Europeans, the land north of the East River was occupied by the Siwanoys, one of many groups of Algonquin-speaking Lenapes in the area. Those of the Lenapes who lived in the northern part of Manhattan Island in a campsite known as Konaande Kongh used a landing at around the current location of East 119th Street to paddle into the river in canoes fashioned from tree trunks in order to fish.

Dutch settlement of what became New Amsterdam began in 1623. Some of the earliest of the small settlements in the area were along the west bank of the East River, on sites that had previously been Native American settlements. As with the Native Americans, the river was central to the settlers' lives, for transportation, for trading, and for fishing. They gathered marsh grass to feed their cattle, and the East River's tides helped to power mills which ground grain to flour. By 1642 there was a ferry running on the river between Manhattan Island and what is now Brooklyn, and the first pier on the river was built in 1647 at Pearl and Broad Streets. After the British took over the colony in 1664 and renamed it "New York", the development of the waterfront continued, and a shipbuilding industry grew up once New York started exporting flour. By the end of the 17th century, the Great Dock, located at Corlear's Hook on the East River, had been built.

Historically, the lower portion of the strait, which separates Manhattan from Brooklyn, was one of the busiest and most important channels in the world, particularly during the first three centuries of New York City's history. Because the water along the lower Manhattan shoreline was too shallow for large boats to tie up and unload their goods, from 1686 on – after the signing of the Dongan Charter, which allowed intertidal land to be owned and sold – the shoreline was "wharfed out" to the high-water mark by constructing retaining walls that were filled in with every conceivable kind of landfill: excrement, dead animals, ships deliberately sunk in place, ship ballast, and muck dredged from the bottom of the river. On the new land were built warehouses and other structures necessary for the burgeoning sea trade. Many of the "water-lot" grants went to the rich and powerful families of the merchant class, although some went to tradesmen. By 1700, the Manhattan bank of the river had been "wharfed out" up to around Whitehall Street, narrowing the strait of the river.
After the signing of the Montgomerie Charter in the late 1720s, another 127 acres of land along the Manhattan shore of the East River was authorized to be filled in, this time to a point 400 feet beyond the low-water mark; the parts that had already been expanded to the low-water mark – much of which had been devastated by a coastal storm in the early 1720s and a nor'easter in 1723 – were also expanded, narrowing the channel even further. What had been quiet beach land was to become new streets and buildings, and the core of the city's sea-borne trade. This infilling went as far north as Corlear's Hook. In addition, the city was given control of the western shore of the river from Wallabout Bay south.

Expansion of the waterfront halted during the American Revolution, in which the East River played an important role early in the conflict. On August 28, 1776, while British and Hessian troops rested after besting the Americans at the Battle of Long Island, General George Washington rounded up all the boats on the east shore of the river, in what is now Brooklyn, and used them to successfully move his troops across the river – under cover of night, rain, and fog – to Manhattan Island, before the British could press their advantage. Thus, though the battle was a victory for the British, the failure of Sir William Howe to destroy the Continental Army when he had the opportunity allowed the Americans to continue fighting. Without the stealthy withdrawal across the East River, the American Revolution might have ended much earlier.

Wallabout Bay on the River was the site of most of the British prison ships – most notoriously HMS Jersey – where thousands of American prisoners of war were held in terrible conditions. These prisoners had come into the hands of the British after the fall of New York City on September 15, 1776, after the American loss at the Battle of Long Island and the loss of Fort Washington on November 16. Prisoners began to be housed on the broken-down warships and transports in December; about 24 ships were used in total, but generally only 5 or 6 at a time. Almost twice as many Americans died from neglect in these ships as died from all the battles in the war: as many as 12,000 soldiers, sailors and civilians. The bodies were thrown overboard or were buried in shallow graves on the riverbanks, but their bones – some of which were collected when they washed ashore – were later relocated and are now inside the Prison Ship Martyrs' Monument in nearby Fort Greene Park. The existence of the ships and the conditions the men were held in were widely known at the time through letters, diaries and memoirs, and were a factor not only in the attitude of Americans toward the British, but in the negotiations to formally end the war.

After the war, East River waterfront development continued once more. New York State legislation, which in 1807 had authorized what would become the Commissioners' Plan of 1811, authorized the creation of new land out to 400 feet from the low water mark into the river, and with the advent of gridded streets along the new waterline – Joseph Mangin had laid out such a grid in 1803 in his A Plan and Regulation of the City of New York, which was rejected by the city, but established the concept – the coastline became regularized at the same time that the strait became even narrower.
One result of the narrowing of the East River along the shoreline of Manhattan and, later, Brooklyn – which continued until the mid-19th century when the state put a stop to it – was an increase in the speed of its current. Buttermilk Channel, the strait that divides Governors Island from Red Hook in Brooklyn, and which is located directly south of the "mouth" of the East River, was in the early 17th century a fordable waterway across which cattle could be driven. Further investigation by Colonel Jonathan Williams determined that the channel was by 1776 three fathoms deep (18 feet (5.5 m)), five fathoms deep (30 feet (9.1 m)) in the same spot by 1798, and when surveyed by Williams in 1807 had deepened to 7 fathoms (42 feet (13 m)) at low tide. What had been almost a bridge between two landforms that were once connected had become a fully navigable channel, thanks to the constriction of the East River and the increased flow it caused. Soon, the current in the East River had become so strong that larger ships had to use auxiliary steam power in order to turn.

The continued narrowing of the channel on both sides may have been the reasoning behind the suggestion of one New York State Senator, who wanted to fill in the East River and annex Brooklyn, with the cost of doing so being covered by selling the newly made land. Others proposed a dam at Roosevelt Island (then Blackwell's Island) to create a wet basin for shipping. Filling in part of the river was also proposed in 1867 by engineer James E. Serrell, later a city surveyor, but with emphasis on solving the problem of Hell Gate. Serrell proposed filling in Hell Gate and building a "New East River" through Queens with an extension to Westchester County. Serrell's plan – which he publicized with maps, essays and lectures as well as presentations to the city, state and federal governments – would have filled in the river from 14th Street to 125th Street. The New East River through Queens would be about three times the average width of the existing one at an even 3,600 feet (1,100 m) throughout, and would run as straight as an arrow for five miles (8.0 km). The new land, and the portions of Queens which would become part of Manhattan, adding 2,500 acres (1,000 ha), would be covered with an extension of the existing street grid of Manhattan.

Variations on Serrell's plan would be floated over the years. A pseudonymous "Terra Firma" brought up filling in the East River again in the Evening Post and Scientific American in 1904, and Thomas Alva Edison took it up in 1906. Then Thomas Kennard Thompson, a bridge and railway engineer, proposed in 1913 to fill in the river from Hell Gate to the tip of Manhattan and, as Serrell had suggested, make a new canalized East River, only this time from Flushing Bay to Jamaica Bay. He would also expand Brooklyn into the Upper Harbor, put up a dam from Brooklyn to Staten Island, and make extensive landfill in the Lower Bay. At around the same time, in the 1920s, John A. Harriss, New York City's chief traffic engineer, who had developed the first traffic signals in the city, also had plans for the river. Harriss wanted to dam the East River at Hell Gate and the Williamsburg Bridge, then remove the water, put a roof over it on stilts, and build boulevards and pedestrian lanes on the roof along with "majestic structures", with transportation services below. The East River's course would, once again, be shifted to run through Queens, and this time Brooklyn as well, to channel it to the Harbor.
Periodically, merchants and other interested parties would try to get something done about the difficulty of navigating through Hell Gate. In 1832, the New York State legislature was presented with a petition for a canal to be built through nearby Hallet's Point, thus avoiding Hell Gate altogether. Instead, the legislature responded by providing ships with pilots trained to navigate the shoals for the next 15 years.

In 1849, a French engineer whose specialty was underwater blasting, Benjamin Maillefert, had cleared some of the rocks which, along with the mix of tides, made the Hell Gate stretch of the river so dangerous to navigate. Ebenezer Meriam had organized a subscription to pay Maillefert $6,000 to, for instance, reduce "Pot Rock" to provide 24 feet (7.3 m) of depth at mean low water. While ships continued to run aground (in the 1850s about 2% of ships did so) and petitions continued to call for action, the federal government undertook surveys of the area which ended in 1851 with a detailed and accurate map. By then Maillefert had cleared the rock "Baldheaded Billy", and it was reported that Pot Rock had been reduced to 20.5 feet (6.2 m), which encouraged the United States Congress to appropriate $20,000 for further clearing of the strait. However, a more accurate survey showed that the depth of Pot Rock was actually a little more than 18 feet (5.5 m), and eventually Congress withdrew its funding.

With the main shipping channels through The Narrows into the harbor silting up with sand due to littoral drift, thus providing ships with less depth, and a new generation of larger ships coming online – epitomized by Isambard Kingdom Brunel's SS Great Eastern, popularly known as "Leviathan" – New York began to be concerned that it would start to lose its status as a great port if a "back door" entrance into the harbor was not created. In the 1850s the depth continued to lessen – the harbor commission said in 1850 that the mean low water was 24 feet (7.3 m) and the extreme low water was 23 feet (7.0 m) – while the draft required by the new ships continued to increase, meaning it was only safe for them to enter the harbor at high tide.

The U.S. Congress, realizing that the problem needed to be addressed, appropriated $20,000 for the Army Corps of Engineers to continue Maillefert's work. In 1851, the U.S. Army Corps of Engineers, led by General John Newton, began to do the job, in an operation which was to span 70 years. The appropriated money was soon spent without appreciable change in the hazards of navigating the strait. An advisory council recommended in 1856 that the strait be cleared of all obstacles, but nothing was done, and the Civil War soon broke out.

In the late 1860s, after the Civil War, Congress realized the military importance of having easily navigable waterways, and charged the Army Corps of Engineers with clearing Hell Gate. Newton estimated that the operation would cost about half as much as the annual losses in shipping. On September 24, 1876, the Corps used 50,000 pounds (23,000 kg) of explosives to blast the rocks, which was followed by further blasting. The process was started by excavating under Hallets reef from Astoria. Cornish miners, assisted by steam drills, dug galleries under the reef, which were then interconnected. They later drilled holes for explosives. A patent was issued for the detonating device. After the explosion, the rock debris was dredged and dropped into a deep part of the river. This was not repeated at the later Flood Rock explosion.
On October 10, 1885, the Corps carried out the largest explosion in this process, annihilating Flood Rock with 300,000 pounds (140,000 kg) of explosives. The blast was felt as far away as Princeton, New Jersey (50 miles). It sent a geyser of water 250 feet (76 m) in the air. The blast has been described as "the largest planned explosion before testing began for the atomic bomb", although the detonation at the Battle of Messines in 1917 was larger. Some of the rubble from the detonation was used in 1890 to fill the gap between Great Mill Rock and Little Mill Rock, merging the two islands into a single island, Mill Rock.

At the same time that Hell Gate was being cleared, the Harlem River Ship Canal was being planned. When it was completed in 1895, the "back door" to New York's center of ship-borne trade in the docks and warehouses of the East River was open from two directions, through the cleared East River, and from the Hudson River through the Harlem River to the East River. Ironically, though, while both forks of the northern shipping entrance to the city were now open, modern dredging techniques had cut through the sandbars of the Atlantic Ocean entrance, allowing new, even larger ships to use that traditional passage into New York's docks.

At the beginning of the 19th century, the East River was the center of New York's shipping industry, but by the end of the century, much of it had moved to the Hudson River, leaving the East River wharves and slips to begin a long process of decay, until the area was finally rehabilitated in the mid-1960s, and the South Street Seaport Museum was opened in 1967.

By 1870, the condition of the Port of New York along both the East and Hudson Rivers had so deteriorated that the New York State legislature created the Department of Docks to renovate the port and keep New York competitive with other ports on the American East Coast. The Department of Docks was given the task of creating the master plan for the waterfront, and General George B. McClellan was engaged to head the project. McClellan held public hearings and invited plans to be submitted, ultimately receiving 70 of them, although in the end he and his successors put his own plan into effect. That plan called for the building of a seawall around Manhattan Island from West 61st Street on the Hudson, around The Battery, and up to East 51st Street on the East River. The area behind the masonry wall (mostly concrete but in some parts granite blocks) would be filled in with landfill, and wide streets would be laid down on the new land. In this way, a new edge for the island (or at least the part of it used as a commercial port) would be created. The department had surveyed 13,700 feet (4,200 m) of shoreline by 1878, as well as documenting the currents and tides. By 1900, 75 miles (121 km) had been surveyed and core samples had been taken to inform the builders of how deep the bedrock was. The work was completed just as World War I began, allowing the Port of New York to be a major point of embarkation for troops and materiel.

The new seawall helps protect Manhattan Island from storm surges, although it is only 5 feet (1.5 m) above the mean sea level, so that particularly dangerous storms, such as the nor'easter of 1992 and Hurricane Sandy in 2012, which hit the city in ways that create much higher surges, can still do significant damage.
(The Hurricane of September 3, 1821, created the biggest storm surge on record in New York City: a rise of 13 feet (4.0 m) in one hour at the Battery, flooding all of lower Manhattan up to Canal Street.) Still, the new seawall begun in 1871 gave the island a firmer edge, improved the quality of the port, and continues to protect Manhattan from normal storm surges.

The Brooklyn Bridge, completed in 1883, was the first bridge to span the East River, connecting the cities of New York and Brooklyn, and all but replacing the frequent ferry service between them, which did not return until the late 20th century. The bridge offered cable car service across the span. The Brooklyn Bridge was followed by the Williamsburg Bridge (1903), the Queensboro Bridge (1909), the Manhattan Bridge (1912) and the Hell Gate Railroad Bridge (1916). Later would come the Triborough Bridge (1936), the Bronx-Whitestone Bridge (1939), the Throgs Neck Bridge (1961) and the Rikers Island Bridge (1966).

In addition, numerous rail tunnels pass under the East River – most of them part of the New York City Subway system – as do the Brooklyn-Battery Tunnel and the Queens-Midtown Tunnel. (See Crossings below for details.) Also under the river is Water Tunnel #1 of the New York City water supply system, built in 1917 to extend the Manhattan portion of the tunnel to Brooklyn, and via City Tunnel #2 (1936) to Queens; these boroughs became part of New York City after the city's consolidation in 1898. City Tunnel #3 will also run under the river, under the northern tip of Roosevelt Island, and is not expected to be completed until at least 2026; the Manhattan portion of the tunnel went into service in 2013.

Philanthropist John D. Rockefeller founded what is now Rockefeller University in 1901, between 63rd and 64th Streets on the river side of York Avenue, overlooking the river. The university is a research university for doctoral and post-doctoral scholars, primarily in the fields of medicine and biological science. North of it is one of the major medical centers in the city, NewYork-Presbyterian / Weill Cornell Medical Center, which is associated with the medical schools of both Columbia University and Cornell University. Although it can trace its history back to 1771, the center on York Avenue, much of which overlooks the river, was built in 1932.

The East River was the site of one of the greatest disasters in the history of New York City when, in June 1904, the PS General Slocum sank near North Brother Island due to a fire. It was carrying 1,400 German-Americans to a picnic site on Long Island for an annual outing. There were only 321 survivors of the disaster, one of the worst losses of life in the city's long history, and a devastating blow to the Little Germany neighborhood on the Lower East Side. The captain of the ship and the managers of the company that owned it were indicted, but only the captain was convicted; he spent three and a half years of his 10-year sentence at Sing Sing Prison before being released by a Federal parole board, and then pardoned by President William Howard Taft.

Beginning in 1934, and then again from 1948 to 1966, the Manhattan shore of the river became the location for the limited-access East River Drive, which was later renamed after Franklin Delano Roosevelt, and is universally known by New Yorkers as the "FDR Drive".
The road is sometimes at grade, sometimes runs under locations such as the site of the Headquarters of the United Nations, Carl Schurz Park, and Gracie Mansion – the mayor's official residence – and is at times double-decked, because Hell Gate provides no room for more landfill. It begins at Battery Park, runs past the Brooklyn, Manhattan, Williamsburg and Queensboro Bridges, and the Ward's Island Footbridge, and terminates just before the Robert F. Kennedy (Triborough) Bridge, where it connects to the Harlem River Drive. Between most of the FDR Drive and the River is the East River Greenway, part of the Manhattan Waterfront Greenway. The East River Greenway was primarily built in connection with the building of the FDR Drive, although some portions were built as recently as 2002, and other sections are still incomplete.
In 1963, Con Edison built the Ravenswood Generating Station on the Long Island City shore of the river, on land parts of which were once stone quarries that provided granite and marble slabs for Manhattan's buildings. The plant has since been owned by KeySpan, National Grid, and TransCanada, as a result of the deregulation of the electrical power industry. The station, which can generate about 20% of the electrical needs of New York City – approximately 2,500 megawatts – receives some of its fuel by oil barge.
North of the power plant can be found Socrates Sculpture Park, a former illegal dumpsite and abandoned landfill that in 1986 was turned into an outdoor museum, exhibition space for artists, and public park by sculptor Mark di Suvero and local activists. The area also contains Rainey Park, which honors Thomas C. Rainey, who attempted for 40 years to get a bridge built in that location from Manhattan to Queens. The Queensboro Bridge was eventually built south of this location.
In 2011, NY Waterway started operating its East River Ferry line. The route was a 7-stop East River service that ran in a loop between East 34th Street and Hunters Point, making two intermediate stops in Brooklyn and three in Queens. The ferry, an alternative to the New York City Subway, cost $4 per one-way ticket. It was instantly popular: from June to November 2011, the ferry saw 350,000 riders, over 250% of the initial ridership forecast of 134,000 riders. In December 2016, in preparation for the start of NYC Ferry service the next year, Hornblower Cruises purchased the rights to operate the East River Ferry. NYC Ferry started service on May 1, 2017, with the East River Ferry as part of the system.
In February 2012, the federal government announced an agreement with Verdant Power to install 30 tidal turbines in the channel of the East River. The turbines were projected to begin operations in 2015 and were expected to produce 1.05 megawatts of power. The strength of the current foiled an earlier effort in 2007 to tap the river for tidal power.
On May 7, 2017, the catastrophic failure of a Con Edison substation in Brooklyn caused a spill into the river of over 5,000 US gallons (18,927 L; 4,163 imp gal) of dielectric fluid, a synthetic mineral oil used to cool electrical equipment and prevent electrical discharges. (See below.)
At the end of 2022, gold miner John Reeves claimed that up to 50 tons of ice age artifacts bound for the American Museum of Natural History, including mammoth remains, had been dumped into the East River near 65th Street.
Although the museum denied that any fossils had been dumped into the river, Reeves's allegations prompted commercial divers to search the river for evidence of mammoth bones.
Throughout most of the history of New York City, and New Amsterdam before it, the East River has been the receptacle for the city's garbage and sewage. "Night men" who collected "night soil" from outdoor privies would dump their loads into the river, and even after the construction of the Croton Aqueduct (1842) and then the New Croton Aqueduct (1890) gave rise to indoor plumbing, the waste that was flushed away into the sewers, where it mixed with ground runoff, ran directly into the river, untreated. The sewers terminated at the slips where ships docked, until the waste began to build up, preventing dockage, after which the outfalls were moved to the end of the piers. The "landfill" which created new land along the shoreline when the river was "wharfed out" by the sale of "water lots" was largely garbage such as bones, offal, and even whole dead animals, along with excrement – human and animal. The result was that by the 1850s, if not before, the East River, like the other waterways around the city, was undergoing the process of eutrophication, in which the increase in nitrogen from excrement and other sources led to a decrease in free oxygen, which in turn led to an increase in phytoplankton such as algae and a decrease in other life forms, breaking the area's established food chain. The East River became very polluted, and its animal life decreased drastically.
In an earlier time, one person had described the transparency of the water: "I remember the time, gentlemen, when you could go in twelve feet of water and you could see the pebbles on the bottom of this river." As the water got more polluted, it darkened, underwater vegetation (such as photosynthesizing seagrass) began dying, and as the seagrass beds declined, the many associated species of their ecosystems declined as well, contributing to the decline of the river. Also harmful was the general destruction of the once plentiful oyster beds in the waters around the city, and the over-fishing of menhaden, or mossbunker, a small silvery fish which had been used since the time of the Native Americans for fertilizing crops; however, it took 8,000 of these schooling fish to fertilize a single acre, so mechanized fishing using the purse seine was developed, and eventually the menhaden population collapsed. Menhaden feed on phytoplankton, helping to keep them in check, and are also a vital step in the food chain, as bluefish, striped bass and other fish species which do not eat phytoplankton feed on the menhaden. The oyster is another filter feeder: oysters purify 10 to 100 gallons of water a day, while each menhaden filters four gallons a minute, and their schools were immense – one report had a farmer collecting 20 oxcarts' worth of menhaden using simple fishing nets deployed from the shore. The combination of more sewage – due to indoor plumbing and the availability of more potable water, New York's water consumption per capita being twice that of Europe – the destruction of filter feeders, and the collapse of the food chain damaged the ecosystem of the waters around New York, including the East River, almost beyond repair.
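To put those two filtration rates on a common footing, the sketch below (a back-of-envelope illustration only, not a figure from the article) converts the menhaden's per-minute rate into a daily volume, assuming purely for illustration that the fish filters continuously around the clock, and compares it with the quoted range for a single oyster:

```python
# Back-of-envelope comparison of the filtration rates quoted above; the two input
# rates come from the text, and the daily totals and ratios are derived here only
# for illustration (a menhaden is assumed to filter continuously all day).
MENHADEN_GAL_PER_MIN = 4        # "each menhaden filters four gallons a minute"
OYSTER_GAL_PER_DAY_LOW = 10     # "oysters purify 10 to 100 gallons of water a day"
OYSTER_GAL_PER_DAY_HIGH = 100

menhaden_gal_per_day = MENHADEN_GAL_PER_MIN * 60 * 24   # gallons/min -> gallons/day
print(f"One menhaden: ~{menhaden_gal_per_day:,} gallons per day")   # ~5,760

# How many oysters' worth of daily filtering a single menhaden represents,
# using the high and low ends of the quoted oyster range.
low_ratio = menhaden_gal_per_day / OYSTER_GAL_PER_DAY_HIGH
high_ratio = menhaden_gal_per_day / OYSTER_GAL_PER_DAY_LOW
print(f"Equivalent to roughly {low_ratio:.0f}-{high_ratio:.0f} oysters' worth of filtering per day")
```

On those assumptions a single menhaden moves on the order of 5,760 gallons a day, far more than even a high-end oyster, which helps explain the emphasis above on the loss of filter feeders.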
Because of these changes to the ecosystem, by 1909 the level of dissolved oxygen in the lower part of the river had declined to less than 65% of saturation; 55% of saturation is the point at which the abundance of fish and the number of their species begin to be affected. Only 17 years later, by 1926, the level of dissolved oxygen in the river had fallen to 13%, below the point at which most fish species can survive.
Due to heavy pollution, the East River is dangerous to people who fall in or attempt to swim in it, although as of mid-2007 the water was cleaner than it had been in decades. As of 2010, the New York City Department of Environmental Protection (DEP) categorizes the East River as Use Classification I, meaning it is safe for secondary contact activities such as boating and fishing. According to the marine sciences section of the DEP, the channel is swift, with water moving as fast as four knots, just as it does in the Hudson River on the other side of Manhattan. That speed can push casual swimmers out to sea. A few people drown in the waters around New York City each year.
As of 2013, it was reported that the level of bacteria in the river was below federal guidelines for swimming on most days, although the readings may vary significantly, so that the outflow from Newtown Creek or the Gowanus Canal can be tens or hundreds of times higher than recommended, according to Riverkeeper, a non-profit environmentalist advocacy group. The counts are also higher along the shores of the strait than they are in the middle of its flow. Nevertheless, the "Brooklyn Bridge Swim" is an annual event where swimmers cross the channel from Brooklyn Bridge Park to Manhattan.
Thanks to reductions in pollution, cleanups, the restriction of development, and other environmental controls, the East River along Manhattan is one of the areas of New York's waterways – including the Hudson-Raritan Estuary and both shores of Long Island – which have shown signs of the return of biodiversity. On the other hand, the river is also under attack from hardy, competitive, alien species, such as the European green crab, which is considered to be one of the world's ten worst invasive species, and is present in the river.
On May 7, 2017, the catastrophic failure of Con Edison's Farragut Substation at 89 John Street in Dumbo, Brooklyn, caused a spill of dielectric fluid – an insoluble synthetic mineral oil, considered non-toxic by New York state, used to cool electrical equipment and prevent electrical discharges – into the East River from a 37,000-US-gallon (140,060 L; 30,809 imp gal) tank. The National Response Center received a report of the spill at 1:30 pm that day, although the public did not learn of the spill for two days, and then only from tweets from NYC Ferry. A "safety zone" was established, extending from a line drawn between Dupont Street in Greenpoint, Brooklyn, to East 25th Street in Kips Bay, Manhattan, south to Buttermilk Channel. Recreational and human-powered vessels such as kayaks and paddleboards were banned from the zone while the oil was being cleaned up, and the speed of commercial vessels was restricted so as not to spread the oil in their wakes, which caused delays in NYC Ferry service. The clean-up efforts were undertaken by Con Edison personnel and private environmental contractors, the U.S. Coast Guard, and the New York State Department of Environmental Conservation, with the assistance of NYC Emergency Management.
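The parenthetical litre and imperial-gallon figures attached to these volumes are straightforward unit conversions from US gallons. As a small illustrative check (not something taken from the source article), the snippet below reproduces them using the standard definitions of the US liquid gallon and the imperial gallon; it also covers the estimated and recovered spill volumes discussed in the following paragraphs:

```python
# Illustrative check only: reproduces the article's parenthetical volume conversions
# using the standard definitions of the US liquid gallon and the imperial gallon.
LITERS_PER_US_GAL = 3.785411784    # litres in one US liquid gallon (exact)
LITERS_PER_IMP_GAL = 4.54609       # litres in one imperial gallon (exact)

def us_gallons_to_metric(us_gal: float) -> tuple[float, float]:
    """Return (litres, imperial gallons) for a volume given in US gallons."""
    liters = us_gal * LITERS_PER_US_GAL
    return liters, liters / LITERS_PER_IMP_GAL

# Volumes quoted in this section and the following paragraphs.
volumes = {
    "tank capacity": 37_000,     # Farragut Substation dielectric-fluid tank
    "estimated spill": 5_200,    # Coast Guard estimate of oil reaching the water
    "volume recovered": 520,     # amount Con Edison later reported recovering
}
for label, us_gal in volumes.items():
    liters, imp_gal = us_gallons_to_metric(us_gal)
    print(f"{label}: {us_gal:,} US gal = {liters:,.0f} L = {imp_gal:,.0f} imp gal")
# tank capacity: 37,000 US gal = 140,060 L = 30,809 imp gal
# estimated spill: 5,200 US gal = 19,684 L = 4,330 imp gal
# volume recovered: 520 US gal = 1,968 L = 433 imp gal
```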
The loss of the substation caused a voltage dip in the power provided by Con Ed to the Metropolitan Transportation Authority's New York City Subway system, which disrupted its signals.
The Coast Guard estimated that 5,200 US gallons (19,684 L; 4,330 imp gal) of oil spilled into the water, with the remainder soaking into the soil at the substation. In the past, the Coast Guard has on average been able to recover about 10% of spilled oil; however, the complex tides in the river made recovery much more difficult, with the turbulent water caused by the river's change of tides pushing contaminated water over the containment booms, after which it is carried out to sea and cannot be recovered. By Friday, May 12, officials from Con Edison reported that almost 600 US gallons (2,271 L; 500 imp gal) had been taken out of the water.
Environmental damage to wildlife was expected to be less than if the spill had been of petroleum-based oil, but the oil can still block the sunlight necessary for the river's fish and other organisms to live. Nesting birds are also in possible danger from the oil contaminating their nests and potentially poisoning the birds or their eggs. Water from the East River was reported to have tested positive for low levels of PCB, a known carcinogen.
Putting the spill into perspective, John Lipscomb, the vice president of advocacy for Riverkeeper, said that the chronic release after heavy rains of overflow from the city's wastewater treatment system was "a bigger problem for the harbor than this accident." The state Department of Environmental Conservation is investigating the spill. It was later reported that, according to DEC data dating back to 1978, the substation involved had spilled 179 times previously, more than any other Con Ed facility. The spills have included 8,400 gallons of dielectric oil, hydraulic oil, and antifreeze, which leaked at various times into the soil around the substation, the sewers, and the East River.
On June 22, Con Edison used non-toxic green dye and divers in the river to find the source of the leak. As a result, a 4-inch (10 cm) hole was plugged. The utility continued to believe that the bulk of the spill went into the ground around the substation, and excavated and removed several hundred cubic yards of soil from the area. It estimated that about 5,200 US gallons (19,684 L; 4,330 imp gal) went into the river, of which 520 US gallons (1,968 L; 433 imp gal) were recovered. Con Edison said that it had installed a new transformer, and intended to add a new barrier around the facility to help guard against future spills propagating into the river.
[ { "paragraph_id": 0, "text": "The East River is a saltwater tidal estuary in New York City. The waterway, which is actually not a river despite its name, connects Upper New York Bay on its south end to Long Island Sound on its north end. It separates Long Island, with the boroughs of Brooklyn and Queens, from Manhattan Island and from the Bronx (on the North American mainland).", "title": "" }, { "paragraph_id": 1, "text": "Because of its connection to Long Island Sound, it was once also known as the Sound River. The tidal strait changes its direction of flow regularly, and is subject to strong fluctuations in its current, which are accentuated by its narrowness and variety of depths. The waterway is navigable for its entire length of 16 miles (26 km), and was historically the center of maritime activities in the city.", "title": "" }, { "paragraph_id": 2, "text": "Technically a drowned valley, like the other waterways around New York City, the strait was formed approximately 11,000 years ago at the end of the Wisconsin glaciation. The distinct change in the shape of the strait between the lower and upper portions is evidence of this glacial activity. The upper portion (from Long Island Sound to Hell Gate), running largely perpendicular to the glacial motion, is wide, meandering, and has deep narrow bays on both banks, scoured out by the glacier's movement. The lower portion (from Hell Gate to New York Bay) runs north–south, parallel to the glacial motion. It is much narrower, with straight banks. The bays that exist, as well as those that used to exist before being filled in by human activity, are largely wide and shallow.", "title": "Formation and description" }, { "paragraph_id": 3, "text": "The section known as \"Hell Gate\" – from the Dutch name Hellegat meaning either \"bright strait\" or \"clear opening\", given to the entire river in 1614 by explorer Adriaen Block when he passed through it in his ship Tyger – is a narrow, turbulent, and particularly treacherous stretch of the river. Tides from the Long Island Sound, New York Harbor and the Harlem River meet there, making it difficult to navigate, especially because of the number of rocky islets which once dotted it, with names such as \"Frying Pan\", \"Pot, Bread and Cheese\", \"Hen and Chicken\", \"Heel Top\"; \"Flood\"; and \"Gridiron\", roughly 12 islets and reefs in all, all of which led to a number of shipwrecks, including HMS Hussar, a British frigate that sank in 1780 while supposedly carrying gold and silver intended to pay British troops. The stretch has since been cleared of rocks and widened. Washington Irving wrote of Hell Gate that the current sounded \"like a bull bellowing for more drink\" at half tide, while at full tide it slept \"as soundly as an alderman after dinner\". He said it was like \"a peaceable fellow enough when he has no liquor at all, or when he has a skinful, but who, when half-seas over, plays the very devil.\" The tidal regime is complex, with the two major tides – from the Long Island Sound and from the Atlantic Ocean – separated by about two hours; and this is without consideration of the tidal influence of the Harlem River, all of which creates a \"dangerous cataract\", as one ship's captain put it.", "title": "Formation and description" }, { "paragraph_id": 4, "text": "The river is navigable for its entire length of 16 miles (26 km). 
In 1939 it was reported that the stretch from The Battery to the former Brooklyn Navy Yard near Wallabout Bay, a run of about 1,000 yards (910 m), was 40 feet (12 m) deep, the long section from there, running to the west of Roosevelt Island, through Hell Gate and to Throg's Neck was at least 35 feet (11 m) deep, and then eastward from there the river was, at mean low tide, 168 feet (51 m) deep.", "title": "Formation and description" }, { "paragraph_id": 5, "text": "The broadness of the river's channel south of Roosevelt Island is caused by the dipping of the hardy Fordham gneiss underlying the island under the less strong Inwood marble which lies under the river bed. Why the river turns to the east as it approaches the three lower Manhattan bridges is geologically unknown.", "title": "Formation and description" }, { "paragraph_id": 6, "text": "Roosevelt Island, a long (2-mile (3.2 km)) and narrow (800 feet (240 m)) landmass, lies in the stretch of the river between Manhattan Island and the borough of Queens roughly paralleling Manhattan's East 46th–86th Streets. The abrupt termination of the island on its north end is due to an extension of the 125th Street Fault. Politically, the island's 147 acres (0.59 km) constitute part of the borough of Manhattan. It is connected to Queens by the Roosevelt Island Bridge, to Manhattan by the Roosevelt Island Tramway, and to both boroughs by a subway station served by the F train. The Queensboro Bridge also runs across Roosevelt Island, and an elevator allowing both pedestrian and vehicular access to the island was added to the bridge in 1930, but elevator service was discontinued in 1955 following the opening of the Roosevelt Island Bridge, and the elevator was demolished in 1970. The island, which was formerly known as Blackwell's Island and Welfare Island before being renamed in honor of US President Franklin D. Roosevelt, historically served as the site of a penitentiary and a number of hospitals; today, it is dominated by residential neighborhoods consisting of large apartment buildings and parkland (much of which is dotted with the ruins of older structures).", "title": "Formation and description" }, { "paragraph_id": 7, "text": "The largest land mass in the River south of Roosevelt Island is U Thant Island, an artificial islet created during the construction of the Steinway Tunnel (which currently serves the subway's 7 and <7> lines). Officially named Belmont Island after one of the tunnel's financiers, the landmass owes its popular name (after Burmese diplomat U Thant, former Secretary-General of the United Nations) to the efforts of a group associated with the guru Sri Chinmoy that held mediation meetings on the island in the 1970s. 
Today, the island is owned by New York State and serves as a migratory bird sanctuary that is closed to visitors.", "title": "Formation and description" }, { "paragraph_id": 8, "text": "Proceeding north and east from Roosevelt Island, the River's principal islands include Manhattan's Mill Rock, an 8.6-acre (3.5 ha) island located about 1000 feet from Manhattan's East 96th Street; Manhattan's 520-acre Randalls and Wards Islands, two formerly separate islands joined by landfill that are home to a large public park, a number of public institutions, and the supports for the Triborough and the Hell Gate Bridges; the Bronx's Rikers Island, once under 100 acres (0.40 km) but now over 400 acres (1.6 km) following extensive landfill expansion after the island's 1884 purchase by the city as a prison farm and still home to New York City's massive and controversial primary jail complex; and North and South Brother Islands, both of which also constitute part of the Bronx.", "title": "Formation and description" }, { "paragraph_id": 9, "text": "The Bronx River, Pugsley Creek, and Westchester Creek drain into the northern bank of the East River in the northern section of the strait. The Flushing River, historically known as Flushing Creek, empties into the strait's southern bank near LaGuardia Airport via Flushing Bay. Further west, Luyster Creek drains into the East River in Astoria, Queens.", "title": "Formation and description" }, { "paragraph_id": 10, "text": "North of Randalls Island, it is joined by the Bronx Kill. Along the east of Wards Island, at approximately the strait's midpoint, it narrows into a channel called Hell Gate, which is spanned by both the Robert F. Kennedy Bridge (formerly the Triborough), and the Hell Gate Bridge. On the south side of Wards Island, it is joined by the Harlem River.", "title": "Formation and description" }, { "paragraph_id": 11, "text": "Newtown Creek on Long Island, which itself contained several tributaries, drains into the East River and forms part of the boundary between Queens and Brooklyn. Bushwick Inlet and Wallabout Bay on Long Island also drain into the strait on the Long Island side. The Gowanus Canal was built from Gowanus Creek, which emptied into the river.", "title": "Formation and description" }, { "paragraph_id": 12, "text": "Historically, there were other small streams which emptied into the river, though these and their associated wetlands have been filled in and built over. These small streams included the Harlem Creek, one of the most significant tributaries originating in Manhattan. Other streams that emptied into the East River included the Sawkill in Manhattan, Mill Brook in the Bronx, and Sunswick Creek in Queens.", "title": "Formation and description" }, { "paragraph_id": 13, "text": "Prior to the arrival of Europeans, the land north of the East River was occupied by the Siwanoys, one of many groups of Algonquin-speaking Lenapes in the area. Those of the Lenapes who lived in the northern part of Manhattan Island in a campsite known as Konaande Kongh used a landing at around the current location of East 119th street to paddle into the river in canoes fashioned from tree trunks in order to fish.", "title": "History" }, { "paragraph_id": 14, "text": "Dutch settlement of what became New Amsterdam began in 1623. Some of the earliest of the small settlements in the area were along the west bank of the East River on sites that had previously been Native American settlements. 
As with the Native Americans, the river was central to their lives for transportation for trading and for fishing. They gathered marsh grass to feed their cattle, and the East River's tides helped to power mills which ground grain to flour. By 1642 there was a ferry running on the river between Manhattan island and what is now Brooklyn, and the first pier on the river was built in 1647 at Pearl and Broad Streets. After the British took over the colony in 1664, which was renamed \"New York\", the development of the waterfront continued, and a shipbuilding industry grew up once New York started exporting flour. By the end of the 17th century, the Great Dock, located at Corlear's Hook on the East River, had been built.", "title": "History" }, { "paragraph_id": 15, "text": "Historically, the lower portion of the strait, which separates Manhattan from Brooklyn, was one of the busiest and most important channels in the world, particularly during the first three centuries of New York City's history. Because the water along the lower Manhattan shoreline was too shallow for large boats to tie up and unload their goods, from 1686 on – after the signing of the Dongan Charter, which allowed intertidal land to be owned and sold – the shoreline was \"wharfed out\" to the high-water mark by constructing retaining walls that were filled in with every conceivable kind of landfill: excrement, dead animals, ships deliberately sunk in place, ship ballast, and muck dredged from the bottom of the river. On the new land were built warehouses and other structures necessary for the burgeoning sea trade. Many of the \"water-lot\" grants went to the rich and powerful families of the merchant class, although some went to tradesmen. By 1700, the Manhattan bank of the river had been \"wharfed-out\" up to around Whitehall Street, narrowing the strait of the river.", "title": "History" }, { "paragraph_id": 16, "text": "After the signing of the Montgomerie Charter in the late 1720s, another 127 acres of land along the Manhattan shore of the East River was authorized to be filled-in, this time to a point 400 feet beyond the low-water mark; the parts that had already been expanded to the low water mark – much of which had been devastated by a coastal storm in the early 1720s and a nor'easter in 1723 – were also expanded, narrowing the channel even further. What had been quiet beach land was to become new streets and buildings, and the core of the city's sea-borne trade. This infilling went as far north as Corlear's Hook. In addition, the city was given control of the western shore of the river from Wallabout Bay south.", "title": "History" }, { "paragraph_id": 17, "text": "Expansion of the waterfront halted during the American Revolution, in which the East River played an important role early in the conflict. On August 28, 1776, while British and Hessian troops rested after besting the Americans at the Battle of Long Island, General George Washington was rounding up all the boats on the east shore of the river, in what is now Brooklyn, and used them to successfully move his troops across the river – under cover of night, rain, and fog – to Manhattan island, before the British could press their advantage. Thus, though the battle was a victory for the British, the failure of Sir William Howe to destroy the Continental Army when he had the opportunity allowed the Americans to continue fighting. 
Without the stealthy withdrawal across the East River, the American Revolution might have ended much earlier.", "title": "History" }, { "paragraph_id": 18, "text": "Wallabout Bay on the River was the site of most of the British prison ships – most notoriously HMS Jersey – where thousands of American prisoners of war were held in terrible conditions. These prisoners had come into the hands of the British after the fall of New York City on September 15, 1776, after the American loss at the Battle of Long Island and the loss of Fort Washington on November 16. Prisoners began to be housed on the broken-down warships and transports in December; about 24 ships were used in total, but generally only 5 or 6 at a time. Almost twice as many Americans died from neglect in these ships than did from all the battles in the war: as many as 12,000 soldiers, sailors and civilians. The bodies were thrown overboard or were buried in shallow graves on the riverbanks, but their bones – some of which were collected when they washed ashore – were later relocated and are now inside the Prison Ship Martyrs' Monument in nearby Fort Greene Park. The existence of the ships and the conditions the men were held in was widely known at the time through letters, diaries and memoirs, and was a factor not only in the attitude of Americans toward the British, but in the negotiations to formally end the war.", "title": "History" }, { "paragraph_id": 19, "text": "After the war, East River waterfront development continued once more. New York State legislation, which in 1807 had authorized what would become the Commissioners Plan of 1811, authorized the creation of new land out to 400 feet from the low water mark into the river, and with the advent of gridded streets along the new waterline – Joseph Mangin had laid out such a grid in 1803 in his A Plan and Regulation of the City of New York, which was rejected by the city, but established the concept – the coastline become regularized at the same time that the strait became even narrower.", "title": "History" }, { "paragraph_id": 20, "text": "One result of the narrowing of the East River along the shoreline of Manhattan and, later, Brooklyn – which continued until the mid-19th century when the state put a stop to it – was an increase in the speed of its current. Buttermilk Channel, the strait that divides Governors Island from Red Hook in Brooklyn, and which is located directly south of the \"mouth\" of the East River, was in the early 17th century a fordable waterway across which cattle could be driven. Further investigation by Colonel Jonathan Williams determined that the channel was by 1776 three fathoms deep (18 feet (5.5 m)), five fathoms deep (30 feet (9.1 m)) in the same spot by 1798, and when surveyed by Williams in 1807 had deepened to 7 fathoms (42 feet (13 m)) at low tide. What had been almost a bridge between two landforms that were once connected had become a fully navigable channel, thanks to the constriction of the East River and the increased flow it caused. Soon, the current in the East River had become so strong that larger ships had to use auxiliary steam power in order to turn. The continued narrowing of the channel on both side may have been the reasoning behind the suggestion of one New York State Senator, who wanted to fill in the East River and annex Brooklyn, with the cost of doing so being covered by selling the newly made land. 
Others proposed a dam at Roosevelt Island (then Blackwell's Island) to create a wet basin for shipping.", "title": "History" }, { "paragraph_id": 21, "text": "Filling in part of the river was also proposed in 1867 by engineer James E. Serrell, later a city surveyor, but with emphasis on solving the problem of Hell Gate. Serrell proposed filling in Hell Gate and building a \"New East River\" through Queens with an extension to Westchester County. Serrell's plan – which he publicized with maps, essay and lectures as well as presentations to the city, state and federal governments – would have filled in the river from 14th Street to 125th Street. The New East River through Queens would be about three times the average width of the existing one at an even 3,600 feet (1,100 m) throughout, and would run as straight as an arrow for five miles (8.0 km). The new land, and the portions of Queens which would become part of Manhattan, adding 2,500 acres (1,000 ha), would be covered with an extension of the existing street grid of Manhattan.", "title": "History" }, { "paragraph_id": 22, "text": "Variations on Serrell's plan would be floated over the years. A pseudonymous \"Terra Firma\" brought up filling in the East River again in the Evening Post and Scientific American in 1904, and Thomas Alva Edison took it up in 1906. Then Thomas Kennard Thompson, a bridge and railway engineer, proposed in 1913 to fill in the river from Hell Gate to the tip of Manhattan and, as Serrell had suggested, make a new canalized East River, only this time from Flushing Bay to Jamaica Bay. He would also expand Brooklyn into the Upper Harbor, put up a dam from Brooklyn to Staten Island, and make extensive landfill in the Lower Bay. At around the same time, in the 1920s, John A. Harriss, New York City's chief traffic engineer, who had developed the first traffic signals in the city, also had plans for the river. Harriss wanted to dam the East River at Hell Gate and the Williamsburg Bridge, then remove the water, put a roof over it on stilts, and build boulevards and pedestrian lanes on the roof along with \"majestic structures\", with transportation services below. The East River's course would, once again, be shifted to run through Queens, and this time Brooklyn as well, to channel it to the Harbor.", "title": "History" }, { "paragraph_id": 23, "text": "Periodically, merchants and other interested parties would try to get something done about the difficulty of navigating through Hell Gate. In 1832, the New York State legislature was presented with a petition for a canal to be built through nearby Hallet's Point, thus avoiding Hell Gate altogether. Instead, the legislature responded by providing ships with pilots trained to navigate the shoals for the next 15 years.", "title": "History" }, { "paragraph_id": 24, "text": "In 1849, a French engineer whose specialty was underwater blasting, Benjamin Maillefert, had cleared some of the rocks which, along with the mix of tides, made the Hell Gate stretch of the river so dangerous to navigate. Ebenezer Meriam had organized a subscription to pay Maillefert $6,000 to, for instance, reduce \"Pot Rock\" to provide 24 feet (7.3 m) of depth at low-mean water. While ships continued to run aground (in the 1850s about 2% of ships did so) and petitions continued to call for action, the federal government undertook surveys of the area which ended in 1851 with a detailed and accurate map. 
By then Maillefert had cleared the rock \"Baldheaded Billy\", and it was reported that Pot Rock had been reduced to 20.5 feet (6.2 m), which encouraged the United States Congress to appropriate $20,000 for further clearing of the strait. However, a more accurate survey showed that the depth of Pot Rock was actually a little more than 18 feet (5.5 m), and eventually Congress withdrew its funding.", "title": "History" }, { "paragraph_id": 25, "text": "With the main shipping channels through The Narrows into the harbor silting up with sand due to littoral drift, thus providing ships with less depth, and a new generation of larger ships coming online – epitomized by Isambard Kingdom Brunel's SS Great Eastern, popularly known as \"Leviathan\" – New York began to be concerned that it would start to lose its status as a great port if a \"back door\" entrance into the harbor was not created. In the 1850s the depth continued to lessen – the harbor commission said in 1850 that the mean water low was 24 feet (7.3 m) and the extreme water low was 23 feet (7.0 m) – while the draft required by the new ships continued to increase, meaning it was only safe for them to enter the harbor at high tide.", "title": "History" }, { "paragraph_id": 26, "text": "The U.S. Congress, realizing that the problem needed to be addressed, appropriated $20,000 for the Army Corps of Engineers to continue Maillefert's work. In 1851, the U.S. Army Corps of Engineers, led by General John Newton, began to do the job, in an operation which was to span 70 years. The appropriated money was soon spent without appreciable change in the hazards of navigating the strait. An advisory council recommended in 1856 that the strait be cleared of all obstacles, but nothing was done, and the Civil War soon broke out.", "title": "History" }, { "paragraph_id": 27, "text": "In the late 1860s, after the Civil War, Congress realized the military importance of having easily navigable waterways, and charged the Army Corps of Engineers with clearing Hell Gate. Newton estimated that the operation would cost about half as much as the annual losses in shipping. On September 24, 1876, the Corps used 50,000 pounds (23,000 kg) of explosives to blast the rocks, which was followed by further blasting. The process was started by excavating under Hallets reef from Astoria. Cornish miners, assisted by steam drills, dug galleries under the reef, which were then interconnected. They later drilled holes for explosives. A patent was issued for the detonating device. After the explosion, the rock debris was dredged and dropped into a deep part of the river. This was not repeated at the later Flood Rock explosion.", "title": "History" }, { "paragraph_id": 28, "text": "On October 10, 1885, the Corps carried out the largest explosion in this process, annihilating Flood Rock with 300,000 pounds (140,000 kg) of explosives. The blast was felt as far away as Princeton, New Jersey (50 miles). It sent a geyser of water 250 feet (76 m) in the air. The blast has been described as \"the largest planned explosion before testing began for the atomic bomb\", although the detonation at the Battle of Messines in 1917 was larger. Some of the rubble from the detonation was used in 1890 to fill the gap between Great Mill Rock and Little Mill Rock, merging the two islands into a single island, Mill Rock.", "title": "History" }, { "paragraph_id": 29, "text": "At the same time that Hell Gate was being cleared, the Harlem River Ship Canal was being planned. 
When it was completed in 1895, the \"back door\" to New York's center of ship-borne trade in the docks and warehouses of the East River was open from two directions, through the cleared East River, and from the Hudson River through the Harlem River to the East River. Ironically, though, while both forks of the northern shipping entrance to the city were now open, modern dredging techniques had cut through the sandbars of the Atlantic Ocean entrance, allowing new, even larger ships to use that traditional passage into New York's docks.", "title": "History" }, { "paragraph_id": 30, "text": "At the beginning of the 19th century, the East River was the center of New York's shipping industry, but by the end of the century, much of it had moved to the Hudson River, leaving the East River wharves and slips to begin a long process of decay, until the area was finally rehabilitated in the mid-1960s, and the South Street Seaport Museum was opened in 1967.", "title": "History" }, { "paragraph_id": 31, "text": "By 1870, the condition of the Port of New York along both the East and Hudson Rivers had so deteriorated that the New York State legislature created the Department of Docks to renovate the port and keep New York competitive with other ports on the American East Coast. The Department of Docks was given the task of creating the master plan for the waterfront, and General George B. McClellan was engaged to head the project. McClellan held public hearings and invited plans to be submitted, ultimately receiving 70 of them, although in the end he and his successors put his own plan into effect. That plan called for the building of a seawall around Manhattan island from West 61st Street on the Hudson, around The Battery, and up to East 51st Street on the East River. The area behind the masonry wall (mostly concrete but in some parts granite blocks) would be filled in with landfill, and wide streets would be laid down on the new land. In this way, a new edge for the island (or at least the part of it used as a commercial port) would be created.", "title": "History" }, { "paragraph_id": 32, "text": "The department had surveyed 13,700 feet (4,200 m) of shoreline by 1878, as well as documenting the currents and tides. By 1900, 75 miles (121 km) had been surveyed and core samples had been taken to inform the builders of how deep the bedrock was. The work was completed just as World War I began, allowing the Port of New York to be a major point of embarkation for troops and materiel.", "title": "History" }, { "paragraph_id": 33, "text": "The new seawall helps protect Manhattan island from storm surges, although it is only 5 feet (1.5 m) above the mean sea level, so that particularly dangerous storms, such as the nor'easter of 1992 and Hurricane Sandy in 2012, which hit the city in a way to create surges which are much higher, can still do significant damage. (The Hurricane of September 3, 1821, created the biggest storm surge on record in New York City: a rise of 13 feet (4.0 m) in one hour at the Battery, flooding all of lower Manhattan up to Canal Street.) 
Still, the new seawall begun in 1871 gave the island a firmer edge, improved the quality of the port, and continues to protect Manhattan from normal storm surges.", "title": "History" }, { "paragraph_id": 34, "text": "The Brooklyn Bridge, completed in 1883, was the first bridge to span the East River, connecting the cities of New York and Brooklyn, and all but replacing the frequent ferry service between them, which did not return until the late 20th century. The bridge offered cable car service across the span. The Brooklyn Bridge was followed by the Williamsburg Bridge (1903), the Queensboro Bridge (1909), the Manhattan Bridge (1912) and the Hell Gate Railroad Bridge (1916). Later would come the Triborough Bridge (1936), the Bronx-Whitestone Bridge (1939), the Throgs Neck Bridge (1961) and the Rikers Island Bridge (1966). In addition, numerous rail tunnels pass under the East River – most of them part of the New York City Subway system – as does the Brooklyn-Battery Tunnel and the Queens-Midtown Tunnel. (See Crossings below for details.) Also under the river is Water Tunnel #1 of the New York City water supply system, built in 1917 to extend the Manhattan portion of the tunnel to Brooklyn, and via City Tunnel #2 (1936) to Queens; these boroughs became part of New York City after the city's consolidation in 1898. City Tunnel #3 will also run under the river, under the northern tip of Roosevelt Island, and is expected to not be completed until at least 2026; the Manhattan portion of the tunnel went into service in 2013.", "title": "History" }, { "paragraph_id": 35, "text": "Philanthropist John D. Rockefeller founded what is now Rockefeller University in 1901, between 63rd and 64th Streets on the river side of York Avenue, overlooking the river. The university is a research university for doctoral and post-doctoral scholars, primarily in the fields of medicine and biological science. North of it is one of the major medical centers in the city, NewYork Presbyterian / Weill Cornell Medical Center, which is associated with the medical schools of both Columbia University and Cornell University. Although it can trace its history back to 1771, the center on York Avenue, much of which overlooks the river, was built in 1932.", "title": "History" }, { "paragraph_id": 36, "text": "The East River was the site of one of the greatest disasters in the history of New York City when, in June 1904, the PS General Slocum sank near North Brother Island due to a fire. It was carrying 1,400 German-Americans to a picnic site on Long Island for an annual outing. There were only 321 survivors of the disaster, one of the worst losses of life in the city's long history, and a devastating blow to the Little Germany neighborhood on the Lower East Side. The captain of the ship and the managers of the company that owned it were indicted, but only the captain was convicted; he spent 3 and a half years of his 10-year sentence at Sing Sing Prison before being released by a Federal parole board, and then pardoned by President William Howard Taft.", "title": "History" }, { "paragraph_id": 37, "text": "Beginning in 1934, and then again from 1948 to 1966, the Manhattan shore of the river became the location for the limited-access East River Drive, which was later renamed after Franklin Delano Roosevelt, and is universally known by New Yorkers as the \"FDR Drive\". 
The road is sometimes at grade, sometimes runs under locations such as the site of the Headquarters of the United Nations and Carl Schurz Park and Gracie Mansion – the mayor's official residence, and is at time double-decked, because Hell Gate provides no room for more landfill. It begins at Battery Park, runs past the Brooklyn, Manhattan, Williamsburg and Queensboro Bridges, and the Ward's Island Footbridge, and terminates just before the Robert F. Kennedy Triboro Bridge when it connects to the Harlem River Drive. Between most of the FDR Drive and the River is the East River Greenway, part of the Manhattan Waterfront Greenway. The East River Greenway was primarily built in connection with the building of the FDR Drive, although some portions were built as recently as 2002, and other sections are still incomplete.", "title": "History" }, { "paragraph_id": 38, "text": "In 1963, Con Edison built the Ravenswood Generating Station on the Long Island City shore of the river, on land some of which was once stone quarries which provided granite and marble slabs for Manhattan's buildings. The plant has since been owned by KeySpan. National Grid and TransCanada, the result of deregulation of the electrical power industry. The station, which can generate about 20% of the electrical needs of New York City – approximately 2,500 megawatts – receives some of its fuel by oil barge.", "title": "History" }, { "paragraph_id": 39, "text": "North of the power plant can be found Socrates Sculpture Park, an illegal dumpsite and abandoned landfill that in 1986 was turned into an outdoor museum, exhibition space for artists, and public park by sculptor Mark di Suvero and local activists. The area also contains Rainey Park, which honors Thomas C. Rainey, who attempted for 40 years to get a bridge built in that location from Manhattan to Queens. The Queensboro Bridge was eventually built south of this location.", "title": "History" }, { "paragraph_id": 40, "text": "In 2011, NY Waterway started operating its East River Ferry line. The route was a 7-stop East River service that runs in a loop between East 34th Street and Hunters Point, making two intermediate stops in Brooklyn and three in Queens. The ferry, an alternative to the New York City Subway, cost $4 per one-way ticket. It was instantly popular: from June to November 2011, the ferry saw 350,000 riders, over 250% of the initial ridership forecast of 134,000 riders. In December 2016, in preparation for the start of NYC Ferry service the next year, Hornblower Cruises purchased the rights to operate the East River Ferry. NYC Ferry started service on May 1, 2017, with the East River Ferry as part of the system.", "title": "History" }, { "paragraph_id": 41, "text": "In February 2012 the federal government announced an agreement with Verdant Power to install 30 tidal turbines in the channel of the East River. The turbines were projected to begin operations in 2015 and are supposed to produce 1.05 megawatts of power. The strength of the current foiled an earlier effort in 2007 to tap the river for tidal power.", "title": "History" }, { "paragraph_id": 42, "text": "On May 7, 2017, the catastrophic failure of a Con Edison substation in Brooklyn caused a spill into the river of over 5,000 US gallons (18,927 L; 4,163 imp gal) of dielectric fluid, a synthetic mineral oil used to cool electrical equipment and prevent electrical discharges. 
(See below.)", "title": "History" }, { "paragraph_id": 43, "text": "At the end of 2022, gold miner John Reeves claimed that up to 50 tons of ice age artifacts bound for the American Museum of Natural History , including mammoth remains, had been dumped into the East River near 65th Street. Although the museum denied that any fossils had been dumped into the river, Reeves's allegations prompted commercial divers to search the river for evidence of mammoth bones.", "title": "History" }, { "paragraph_id": 44, "text": "Throughout most of the history of New York City, and New Amsterdam before it, the East River has been the receptacle for the city's garbage and sewage. \"Night men\" who collected \"night soil\" from outdoor privies would dump their loads into the river, and even after the construction of the Croton Aqueduct (1842) and then the New Croton Aqueduct (1890) gave rise to indoor plumbing, the waste that was flushed away into the sewers, where it mixed with ground runoff, ran directly into the river, untreated. The sewers terminated at the slips where ships docked, until the waste began to build up, preventing dockage, after which the outfalls were moved to the end of the piers. The \"landfill\" which created new land along the shoreline when the river was \"wharfed out\" by the sale of \"water lots\" was largely garbage such as bones, offal, and even whole dead animals, along with excrement – human and animal. The result was that by the 1850s, if not before, the East River, like the other waterways around the city, was undergoing the process of eutrophication where the increase in nitrogen from excrement and other sources led to a decrease in free oxygen, which in turn led to an increase in phytoplankton such as algae and a decrease in other life forms, breaking the area's established food chain. The East River became very polluted, and its animal life decreased drastically.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 45, "text": "In an earlier time, one person had described the transparency of the water: \"I remember the time, gentlemen, when you could go in twelve feet of water and you could see the pebbles on the bottom of this river.\" As the water got more polluted, it darkened, underwater vegetation (such as photosynthesizing seagrass) began dying, and as the seagrass beds declined, the many associated species of their ecosystems declined as well, contributing to the decline of the river. Also harmful was the general destruction of the once plentiful oyster beds in the waters around the city, and the over-fishing of menhaden, or mossbunker, a small silvery fish which had been used since the time of the Native Americans for fertilizing crops – however it took 8,000 of these schooling fish to fertilize a single acre, so mechanized fishing using the purse seine was developed, and eventually the menhaden population collapsed. Menhaden feed on phytoplankton, helping to keep them in check, and are also a vital step in the food chain, as bluefish, striped bass and other fish species which do not eat phytoplankton feed on the menhaden. The oyster is another filter feeder: oysters purify 10 to 100 gallons a day, while each menhaden filters four gallons in a minute, and their schools were immense: one report had a farmer collecting 20 oxcarts worth of menhaden using simple fishing nets deployed from the shore. 
The combination of more sewage, due to the availability of more potable water – New York's water consumption per capita was twice that of Europe – indoor plumbing, the destruction of filter feeders, and the collapse of the food chain, damaged the ecosystem of the waters around New York, including the East River, almost beyond repair.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 46, "text": "Because of these changes to the ecosystem, by 1909, the level of dissolved-oxygen in the lower part of the river had declined to less than 65%, where 55% of saturation is the point at which the amount of fish and the number of their species begins to be affected. Only 17 years later, by 1926, the level of dissolved oxygen in the river had fallen to 13%, below the point at which most fish species can survive.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 47, "text": "Due to heavy pollution, the East River is dangerous to people who fall in or attempt to swim in it, although as of mid-2007 the water was cleaner than it had been in decades. As of 2010, the New York City Department of Environmental Protection (DEP) categorizes the East River as Use Classification I, meaning it is safe for secondary contact activities such as boating and fishing. According to the marine sciences section of the DEP, the channel is swift, with water moving as fast as four knots, just as it does in the Hudson River on the other side of Manhattan. That speed can push casual swimmers out to sea. A few people drown in the waters around New York City each year.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 48, "text": "As of 2013, it was reported that the level of bacteria in the river was below Federal guidelines for swimming on most days, although the readings may vary significantly, so that the outflow from Newtown Creek or the Gowanus Canal can be tens or hundreds of times higher than recommended, according to Riverkeeper, a non-profit environmentalist advocacy group. The counts are also higher along the shores of the strait than they are in the middle of its flow. Nevertheless, the \"Brooklyn Bridge Swim\" is an annual event where swimmers cross the channel from Brooklyn Bridge Park to Manhattan.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 49, "text": "Thanks to reductions in pollution, cleanups, the restriction of development, and other environmental controls, the East River along Manhattan is one of the areas of New York's waterways – including the Hudson-Raritan Estuary and both shores of Long Island – which have shown signs of the return of biodiversity. On the other hand, the river is also under attack from hardy, competitive, alien species, such as the European green crab, which is considered to be one of the world's ten worst invasive species, and is present in the river.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 50, "text": "On May 7, 2017, the catastrophic failure of Con Edison's Farragut Substation at 89 John Street in Dumbo, Brooklyn, caused a spill of dielectric fluid – an insoluble synthetic mineral oil, considered non-toxic by New York state, used to cool electrical equipment and prevent electrical discharges – into the East River from a 37,000-US-gallon (140,060 L; 30,809 imp gal) tank. 
The National Response Center received a report of the spill at 1:30pm that day, although the public did not learn of the spill for two days, and then only from tweets from NYC Ferry. A \"safety zone\" was established, extending from a line drawn between Dupont Street in Greenpoint, Brooklyn, to East 25th Street in Kips Bay, Manhattan, south to Buttermilk Channel. Recreational and human-powered vehicles such as kayaks and paddleboards were banned from the zone while the oil was being cleaned up, and the speed of commercial vehicles restricted so as not to spread the oil in their wakes, causing delays in NYC Ferry service. The clean-up efforts were being undertaken by Con Edison personnel and private environmental contractors, the U.S. Coast Guard, and the New York State Department of Environmental Conservation, with the assistance of NYC Emergency Management.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 51, "text": "The loss of the sub-station caused a voltage dip in the power provided by Con Ed to the Metropolitan Transportation Authority's New York City Subway system, which disrupted its signals.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 52, "text": "The Coast Guard estimated that 5,200 US gallons (19,684 L; 4,330 imp gal) of oil spilled into the water, with the remainder soaking into the soil at the substation. In the past the Coast Guard has on average been able to recover about 10% of oil spilled, however the complex tides in the river make the recovery much more difficult, with the turbulent water caused by the river's change of tides pushing contaminated water over the containment booms, where it is then carried out to sea and cannot be recovered. By Friday May 12, officials from Con Edison reported that almost 600 US gallons (2,271 L; 500 imp gal) had been taken out of the water.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 53, "text": "Environmental damage to wildlife is expected to be less than if the spill was of petroleum-based oil, but the oil can still block the sunlight necessary for the river's fish and other organisms to live. Nesting birds are also in possible danger from the oil contaminating their nests and potentially poisoning the birds or their eggs. Water from the East River was reported to have tested positive for low levels of PCB, a known carcinogen.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 54, "text": "Putting the spill into perspective, John Lipscomb, the vice president of advocacy for Riverkeepers said that the chronic release after heavy rains of overflow from city's wastewater treatment system was \"a bigger problem for the harbor than this accident.\" The state Department of Environmental Conservation is investigating the spill. It was later reported that according to DEC data which dates back to 1978, the substation involved had spilled 179 times previously, more than any other Con Ed facility. The spills have included 8,400 gallons of dielectric oil, hydraulic oil, and antifreeze which leaked at various times into the soil around the substation, the sewers, and the East River.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 55, "text": "On June 22, Con Edison used non-toxic green dye and divers in the river to find the source of the leak. As a result, a 4-inch (10 cm) hole was plugged. 
The utility continued to believe that the bulk of the spill went into the ground around the substation, and excavated and removed several hundred cubic yards of soil from the area. They estimated that about 5,200 US gallons (19,684 L; 4,330 imp gal) went into the river, of which 520 US gallons (1,968 L; 433 imp gal) were recovered. Con Edison said that it installed a new transformer, and intended to add new barrier around the facility to help guard against future spills propagating into the river.", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 56, "text": "", "title": "Ecosystem collapse, pollution and health" }, { "paragraph_id": 57, "text": "Informational notes", "title": "References" }, { "paragraph_id": 58, "text": "Citations", "title": "References" }, { "paragraph_id": 59, "text": "Bibliography", "title": "References" } ]
The East River is a saltwater tidal estuary in New York City. The waterway, which is actually not a river despite its name, connects Upper New York Bay on its south end to Long Island Sound on its north end. It separates Long Island, with the boroughs of Brooklyn and Queens, from Manhattan Island and from the Bronx. Because of its connection to Long Island Sound, it was once also known as the Sound River. The tidal strait changes its direction of flow regularly, and is subject to strong fluctuations in its current, which are accentuated by its narrowness and variety of depths. The waterway is navigable for its entire length of 16 miles (26 km), and was historically the center of maritime activities in the city.
https://en.wikipedia.org/wiki/East_River
Existentialism
Existentialism is a form of philosophical inquiry that explores the issue of human existence. Existentialist philosophers explore questions related to the meaning, purpose, and value of human existence. Common concepts in existentialist thought include existential crisis, dread, and anxiety in the face of an absurd world (see: human free will), as well as authenticity, courage, and virtue. Existentialism is associated with several 19th- and 20th-century European philosophers who shared an emphasis on the human subject, despite often profound differences in thought. Among the earliest figures associated with existentialism are philosophers Søren Kierkegaard, Friedrich Nietzsche and novelist Fyodor Dostoevsky, all of whom critiqued rationalism and concerned themselves with the problem of meaning. In the 20th century, prominent existentialist thinkers included Jean-Paul Sartre, Albert Camus, Martin Heidegger, Simone de Beauvoir, Karl Jaspers, Gabriel Marcel, and Paul Tillich. Many existentialists considered traditional systematic or academic philosophies, in style and content, to be too abstract and removed from concrete human experience. A primary virtue in existentialist thought is authenticity. Existentialism would influence many disciplines outside of philosophy, including theology, drama, art, literature, and psychology. Existentialist philosophy encompasses a range of perspectives, but it shares certain underlying concepts. Among these, a central tenet of existentialism is that personal freedom, individual responsibility, and deliberate choice are essential to the pursuit of self-discovery and the determination of life's meaning. The term existentialism (French: L'existentialisme) was coined by the French Catholic philosopher Gabriel Marcel in the mid-1940s. When Marcel first applied the term to Jean-Paul Sartre, at a colloquium in 1945, Sartre rejected it. Sartre subsequently changed his mind and, on October 29, 1945, publicly adopted the existentialist label in a lecture to the Club Maintenant in Paris, published as L'existentialisme est un humanisme (Existentialism Is a Humanism), a short book that helped popularize existentialist thought. Marcel later came to reject the label himself in favour of Neo-Socratic, in honor of Kierkegaard's essay "On the Concept of Irony". Some scholars argue that the term should be used to refer only to the cultural movement in Europe in the 1940s and 1950s associated with the works of the philosophers Sartre, Simone de Beauvoir, Maurice Merleau-Ponty, and Albert Camus. Others extend the term to Kierkegaard, and yet others extend it as far back as Socrates. However, it is often identified with the philosophical views of Sartre. The labels existentialism and existentialist are often seen as historical conveniences in as much as they were first applied to many philosophers long after they had died. While existentialism is generally considered to have originated with Kierkegaard, the first prominent existentialist philosopher to adopt the term as a self-description was Sartre. Sartre posits the idea that "what all existentialists have in common is the fundamental doctrine that existence precedes essence", as the philosopher Frederick Copleston explains. According to philosopher Steven Crowell, defining existentialism has been relatively difficult, and he argues that it is better understood as a general approach used to reject certain systematic philosophies rather than as a systematic philosophy itself. 
In a lecture delivered in 1945, Sartre described existentialism as "the attempt to draw all the consequences from a position of consistent atheism". For others, existentialism need not involve the rejection of God, but rather "examines mortal man's search for meaning in a meaningless universe", considering less "What is the good life?" (to feel, be, or do, good), instead asking "What is life good for?". Although many outside Scandinavia consider the term existentialism to have originated from Kierkegaard, it is more likely that Kierkegaard adopted this term (or at least the term "existential" as a description of his philosophy) from the Norwegian poet and literary critic Johan Sebastian Cammermeyer Welhaven. This assertion comes from two sources: Sartre argued that a central proposition of existentialism is that existence precedes essence, which is to say that individuals shape themselves by existing and cannot be perceived through preconceived and a priori categories, an "essence". The actual life of the individual is what constitutes what could be called their "true essence" instead of an arbitrarily attributed essence others use to define them. Human beings, through their own consciousness, create their own values and determine a meaning to their life. This view is in contradiction to Aristotle and Aquinas, who taught that essence precedes individual existence. Although it was Sartre who explicitly coined the phrase, similar notions can be found in the thought of existentialist philosophers such as Heidegger, and Kierkegaard: The subjective thinker's form, the form of his communication, is his style. His form must be just as manifold as are the opposites that he holds together. The systematic eins, zwei, drei is an abstract form that also must inevitably run into trouble whenever it is to be applied to the concrete. To the same degree as the subjective thinker is concrete, to that same degree his form must also be concretely dialectical. But just as he himself is not a poet, not an ethicist, not a dialectician, so also his form is none of these directly. His form must first and last be related to existence, and in this regard he must have at his disposal the poetic, the ethical, the dialectical, the religious. Subordinate character, setting, etc., which belong to the well-balanced character of the esthetic production, are in themselves breadth; the subjective thinker has only one setting—existence—and has nothing to do with localities and such things. The setting is not the fairyland of the imagination, where poetry produces consummation, nor is the setting laid in England, and historical accuracy is not a concern. The setting is inwardness in existing as a human being; the concretion is the relation of the existence-categories to one another. Historical accuracy and historical actuality are breadth. Some interpret the imperative to define oneself as meaning that anyone can wish to be anything. However, an existentialist philosopher would say such a wish constitutes an inauthentic existence – what Sartre would call "bad faith". Instead, the phrase should be taken to say that people are defined only insofar as they act and that they are responsible for their actions. Someone who acts cruelly towards other people is, by that act, defined as a cruel person. Such persons are themselves responsible for their new identity (cruel persons). This is opposed to their genes, or human nature, bearing the blame. 
As Sartre said in his lecture Existentialism is a Humanism: "Man first of all exists, encounters himself, surges up in the world—and defines himself afterwards." The more positive, therapeutic aspect of this is also implied: a person can choose to act in a different way, and to be a good person instead of a cruel person. Jonathan Webber interprets Sartre's usage of the term essence not in a modal fashion, i.e. as necessary features, but in a teleological fashion: "an essence is the relational property of having a set of parts ordered in such a way as to collectively perform some activity". For example, it belongs to the essence of a house to keep the bad weather out, which is why it has walls and a roof. Humans are different from houses because—unlike houses—they do not have an inbuilt purpose: they are free to choose their own purpose and thereby shape their essence; thus, their existence precedes their essence. Sartre is committed to a radical conception of freedom: nothing fixes our purpose but we ourselves, our projects have no weight or inertia except for our endorsement of them. Simone de Beauvoir, on the other hand, holds that there are various factors, grouped together under the term sedimentation, that offer resistance to attempts to change our direction in life. Sedimentations are themselves products of past choices and can be changed by choosing differently in the present, but such changes happen slowly. They are a force of inertia that shapes the agent's evaluative outlook on the world until the transition is complete. Sartre's definition of existentialism was based on Heidegger's magnum opus Being and Time (1927). In the correspondence with Jean Beaufret later published as the Letter on Humanism, Heidegger implied that Sartre misunderstood him for his own purposes of subjectivism, and that he did not mean that actions take precedence over being so long as those actions were not reflected upon. Heidegger commented that "the reversal of a metaphysical statement remains a metaphysical statement", meaning that he thought Sartre had simply switched the roles traditionally attributed to essence and existence without interrogating these concepts and their history. The notion of the absurd contains the idea that there is no meaning in the world beyond what meaning we give it. This meaninglessness also encompasses the amorality or "unfairness" of the world. This can be highlighted in the way it opposes the traditional Abrahamic religious perspective, which establishes that life's purpose is the fulfillment of God's commandments. This is what gives meaning to people's lives. To live the life of the absurd means rejecting a life that finds or pursues specific meaning for man's existence since there is nothing to be discovered. According to Albert Camus, the world or the human being is not in itself absurd. The concept only emerges through the juxtaposition of the two; life becomes absurd due to the incompatibility between human beings and the world they inhabit. This view constitutes one of the two interpretations of the absurd in existentialist literature. The second view, first elaborated by Søren Kierkegaard, holds that absurdity is limited to actions and choices of human beings. These are considered absurd since they issue from human freedom, undermining their foundation outside of themselves. 
The absurd contrasts with the claim that "bad things don't happen to good people"; to the world, metaphorically speaking, there is no such thing as a good person or a bad person; what happens happens, and it may just as well happen to a "good" person as to a "bad" person. Because of the world's absurdity, anything can happen to anyone at any time and a tragic event could plummet someone into direct confrontation with the absurd. Many of the literary works of Kierkegaard, Beckett, Kafka, Dostoevsky, Ionesco, Miguel de Unamuno, Luigi Pirandello, Sartre, Joseph Heller, and Camus contain descriptions of people who encounter the absurdity of the world. It is because of the devastating awareness of meaninglessness that Camus claimed in The Myth of Sisyphus that "There is only one truly serious philosophical problem, and that is suicide." Although "prescriptions" against the possible deleterious consequences of these kinds of encounters vary, from Kierkegaard's religious "stage" to Camus' insistence on persevering in spite of absurdity, the concern with helping people avoid living their lives in ways that put them in the perpetual danger of having everything meaningful break down is common to most existentialist philosophers. The possibility of having everything meaningful break down poses a threat of quietism, which is inherently against the existentialist philosophy. It has been said that the possibility of suicide makes all humans existentialists. The ultimate hero of absurdism lives without meaning and faces suicide without succumbing to it. Facticity is defined by Sartre in Being and Nothingness (1943) as the in-itself, which for humans takes the form of being and not being. It is the facts of one's personal life and as per Heidegger, it is "the way in which we are thrown into the world." This can be more easily understood when considering facticity in relation to the temporal dimension of our past: one's past is what one is, meaning that it is what has formed the person who exists in the present. However, to say that one is only one's past would ignore the change a person undergoes in the present and future, while saying that one's past is only what one was, would entirely detach it from the present self. A denial of one's concrete past constitutes an inauthentic lifestyle, and also applies to other kinds of facticity (having a human body—e.g., one that does not allow a person to run faster than the speed of sound—identity, values, etc.). Facticity is a limitation and a condition of freedom. It is a limitation in that a large part of one's facticity consists of things one did not choose (birthplace, etc.), but a condition of freedom in the sense that one's values most likely depend on it. However, even though one's facticity is "set in stone" (as being past, for instance), it cannot determine a person: the value ascribed to one's facticity is still ascribed to it freely by that person. As an example, consider two men, one of whom has no memory of his past and the other who remembers everything. Both have committed many crimes, but the first man, remembering nothing, leads a rather normal life while the second man, feeling trapped by his own past, continues a life of crime, blaming his own past for "trapping" him in this life. There is nothing essential about his committing crimes, but he ascribes this meaning to his past. 
However, to disregard one's facticity during the continual process of self-making, projecting oneself into the future, would be to put oneself in denial of the conditions shaping the present self and would be inauthentic. The origin of one's projection must still be one's facticity, though in the mode of not being it (essentially). An example of one focusing solely on possible projects without reflecting on one's current facticity: would be someone who continually thinks about future possibilities related to being rich (e.g. a better car, bigger house, better quality of life, etc.) without acknowledging the facticity of not currently having the financial means to do so. In this example, considering both facticity and transcendence, an authentic mode of being would be considering future projects that might improve one's current finances (e.g. putting in extra hours, or investing savings) in order to arrive at a future-facticity of a modest pay rise, further leading to purchase of an affordable car. Another aspect of facticity is that it entails angst. Freedom "produces" angst when limited by facticity and the lack of the possibility of having facticity to "step in" and take responsibility for something one has done also produces angst. Another aspect of existential freedom is that one can change one's values. One is responsible for one's values, regardless of society's values. The focus on freedom in existentialism is related to the limits of responsibility one bears, as a result of one's freedom. The relationship between freedom and responsibility is one of interdependency and a clarification of freedom also clarifies that for which one is responsible. Many noted existentialists consider the theme of authentic existence important. Authenticity involves the idea that one has to "create oneself" and live in accordance with this self. For an authentic existence, one should act as oneself, not as "one's acts" or as "one's genes" or as any other essence requires. The authentic act is one in accordance with one's freedom. A component of freedom is facticity, but not to the degree that this facticity determines one's transcendent choices (one could then blame one's background for making the choice one made [chosen project, from one's transcendence]). Facticity, in relation to authenticity, involves acting on one's actual values when making a choice (instead of, like Kierkegaard's Aesthete, "choosing" randomly), so that one takes responsibility for the act instead of choosing either-or without allowing the options to have different values. In contrast, the inauthentic is the denial to live in accordance with one's freedom. This can take many forms, from pretending choices are meaningless or random, convincing oneself that some form of determinism is true, or "mimicry" where one acts as "one should". How one "should" act is often determined by an image one has, of how one in such a role (bank manager, lion tamer, sex worker, etc.) acts. In Being and Nothingness, Sartre uses the example of a waiter in "bad faith". He merely takes part in the "act" of being a typical waiter, albeit very convincingly. This image usually corresponds to a social norm, but this does not mean that all acting in accordance with social norms is inauthentic. The main point is the attitude one takes to one's own freedom and responsibility and the extent to which one acts in accordance with this freedom. 
The Other (written with a capital "O") is a concept more properly belonging to phenomenology and its account of intersubjectivity. However, it has seen widespread use in existentialist writings, and the conclusions drawn differ slightly from the phenomenological accounts. The Other is the experience of another free subject who inhabits the same world as a person does. In its most basic form, it is this experience of the Other that constitutes intersubjectivity and objectivity. To clarify, when one experiences someone else, and this Other person experiences the world (the same world that a person experiences)—only from "over there"—the world is constituted as objective in that it is something that is "there" as identical for both of the subjects; a person experiences the other person as experiencing the same things. This experience of the Other's look is what is termed the Look (sometimes the Gaze). While this experience, in its basic phenomenological sense, constitutes the world as objective and oneself as objectively existing subjectivity (one experiences oneself as seen in the Other's Look in precisely the same way that one experiences the Other as seen by him, as subjectivity), in existentialism, it also acts as a kind of limitation of freedom. This is because the Look tends to objectify what it sees. When one experiences oneself in the Look, one does not experience oneself as nothing (no thing), but as something (some thing). In Sartre's example of a man peeping at someone through a keyhole, the man is entirely caught up in the situation he is in. He is in a pre-reflexive state where his entire consciousness is directed at what goes on in the room. Suddenly, he hears a creaking floorboard behind him and he becomes aware of himself as seen by the Other. He is then filled with shame for he perceives himself as he would perceive someone else doing what he was doing—as a Peeping Tom. For Sartre, this phenomenological experience of shame establishes proof for the existence of other minds and defeats the problem of solipsism. For the conscious state of shame to be experienced, one has to become aware of oneself as an object of another look, proving a priori, that other minds exist. The Look is then co-constitutive of one's facticity. Another characteristic feature of the Look is that no Other really needs to have been there: It is possible that the creaking floorboard was simply the movement of an old house; the Look is not some kind of mystical telepathic experience of the actual way the Other sees one (there may have been someone there, but he could have not noticed that person). It is only one's perception of the way another might perceive him. "Existential angst", sometimes called existential dread, anxiety, or anguish, is a term common to many existentialist thinkers. It is generally held to be a negative feeling arising from the experience of human freedom and responsibility. The archetypal example is the experience one has when standing on a cliff where one not only fears falling off it, but also dreads the possibility of throwing oneself off. In this experience that "nothing is holding me back", one senses the lack of anything that predetermines one to either throw oneself off or to stand still, and one experiences one's own freedom. It can also be seen in relation to the previous point how angst is before nothing, and this is what sets it apart from fear that has an object. 
While one can take measures to remove an object of fear, for angst no such "constructive" measures are possible. The use of the word "nothing" in this context relates to the inherent insecurity about the consequences of one's actions and to the fact that, in experiencing freedom as angst, one also realizes that one is fully responsible for these consequences. There is nothing in people (genetically, for instance) that acts in their stead—that they can blame if something goes wrong. Therefore, not every choice is perceived as having dreadful possible consequences (and, it can be claimed, human lives would be unbearable if every choice facilitated dread). However, this does not change the fact that freedom remains a condition of every action. Despair is generally defined as a loss of hope. In existentialism, it is more specifically a loss of hope in reaction to a breakdown in one or more of the defining qualities of one's self or identity. If a person is invested in being a particular thing, such as a bus driver or an upstanding citizen, and then finds their being-thing compromised, they would normally be found in a state of despair—a hopeless state. For example, a singer who loses the ability to sing may despair if they have nothing else to fall back on—nothing to rely on for their identity. They find themselves unable to be what defined their being. What sets the existentialist notion of despair apart from the conventional definition is that existentialist despair is a state one is in even when they are not overtly in despair. So long as a person's identity depends on qualities that can crumble, they are in perpetual despair—and as there is, in Sartrean terms, no human essence found in conventional reality on which to constitute the individual's sense of identity, despair is a universal human condition. As Kierkegaard defines it in Either/Or: "Let each one learn what he can; both of us can learn that a person's unhappiness never lies in his lack of control over external conditions, since this would only make him completely unhappy." In Works of Love, he says: When the God-forsaken worldliness of earthly life shuts itself in complacency, the confined air develops poison, the moment gets stuck and stands still, the prospect is lost, a need is felt for a refreshing, enlivening breeze to cleanse the air and dispel the poisonous vapors lest we suffocate in worldliness. ... Lovingly to hope all things is the opposite of despairingly to hope nothing at all. Love hopes all things—yet is never put to shame. To relate oneself expectantly to the possibility of the good is to hope. To relate oneself expectantly to the possibility of evil is to fear. By the decision to choose hope one decides infinitely more than it seems, because it is an eternal decision. Existentialists oppose defining human beings as primarily rational, and, therefore, oppose both positivism and rationalism. Existentialism asserts that people make decisions based on subjective meaning rather than pure rationality. The rejection of reason as the source of meaning is a common theme of existentialist thought, as is the focus on the anxiety and dread that we feel in the face of our own radical free will and our awareness of death. Kierkegaard advocated rationality as a means to interact with the objective world (e.g., in the natural sciences), but when it comes to existential problems, reason is insufficient: "Human reason has boundaries". 
Like Kierkegaard, Sartre saw problems with rationality, calling it a form of "bad faith", an attempt by the self to impose structure on a world of phenomena—"the Other"—that is fundamentally irrational and random. According to Sartre, rationality and other forms of bad faith hinder people from finding meaning in freedom. To try to suppress feelings of anxiety and dread, people confine themselves within everyday experience, Sartre asserted, thereby relinquishing their freedom and acquiescing to being possessed in one form or another by "the Look" of "the Other" (i.e., possessed by another person—or at least one's idea of that other person).

An existentialist reading of the Bible would demand that the reader recognize that they are an existing subject studying the words more as a recollection of events. This is in contrast to looking at a collection of "truths" that are outside and unrelated to the reader, but may develop a sense of reality/God. Such a reader is not obligated to follow the commandments as if an external agent is forcing these commandments upon them, but as though they are inside them and guiding them from inside. This is the task Kierkegaard takes up when he asks: "Who has the more difficult task: the teacher who lectures on earnest things a meteor's distance from everyday life—or the learner who should put it to use?" Similarly, Hans Jonas and Rudolf Bultmann introduced the concept of existentialist demythologization into the fields of early Christianity and Christian theology, respectively.

Although nihilism and existentialism are distinct philosophies, they are often confused with one another since both are rooted in the human experience of anguish and confusion that stems from the apparent meaninglessness of a world in which humans are compelled to find or create meaning. A primary cause of confusion is that Friedrich Nietzsche was an important philosopher in both fields. Existentialist philosophers often stress the importance of angst as signifying the absolute lack of any objective ground for action, a move that is often reduced to moral or existential nihilism. A pervasive theme in existentialist philosophy, however, is to persist through encounters with the absurd, as seen in Camus's The Myth of Sisyphus ("One must imagine Sisyphus happy."), and it is only very rarely that existentialist philosophers dismiss morality or one's self-created meaning: Kierkegaard regained a sort of morality in the religious (although he would not agree that it was ethical; the religious suspends the ethical), and Sartre's final words in Being and Nothingness are: "All these questions, which refer us to a pure and not an accessory (or impure) reflection, can find their reply only on the ethical plane. We shall devote to them a future work."

Some have argued that existentialism has long been an element of European religious thought, even before the term came into use. William Barrett identified Blaise Pascal and Søren Kierkegaard as two specific examples. Jean Wahl also identified William Shakespeare's Prince Hamlet ("To be, or not to be"), Jules Lequier, Thomas Carlyle and William James as existentialists. According to Wahl, "the origins of most great philosophies, like those of Plato, Descartes, and Kant, are to be found in existential reflections." Precursors to existentialism can also be identified in the works of the Iranian Islamic philosopher Mulla Sadra
(c. 1571–1635), who posited that "existence precedes essence" and became the principal expositor of the School of Isfahan, which has been described as 'alive and active'.

Kierkegaard is generally considered to have been the first existentialist philosopher. He proposed that each individual—not reason, society, or religious orthodoxy—is solely tasked with giving meaning to life and living it sincerely, or "authentically". Kierkegaard and Nietzsche were two of the first philosophers considered fundamental to the existentialist movement, though neither used the term "existentialism" and it is unclear whether they would have supported the existentialism of the 20th century. They focused on subjective human experience rather than the objective truths of mathematics and science, which they believed were too detached or observational to truly get at the human experience. Like Pascal, they were interested in people's quiet struggle with the apparent meaninglessness of life and the use of diversion to escape from boredom. Unlike Pascal, Kierkegaard and Nietzsche also considered the role of making free choices, particularly regarding fundamental values and beliefs, and how such choices change the nature and identity of the chooser. Kierkegaard's knight of faith and Nietzsche's Übermensch are representative of people who exhibit freedom, in that they define the nature of their own existence. Nietzsche's idealized individual invents his own values and creates the very terms under which he excels. By contrast, Kierkegaard, opposed to the level of abstraction in Hegel, and not nearly as hostile (actually welcoming) to Christianity as Nietzsche, argues through a pseudonym that the objective certainty of religious truths (specifically Christian) is not only impossible, but even founded on logical paradoxes. Yet he continues to imply that a leap of faith is a possible means for an individual to reach a higher stage of existence that transcends and contains both an aesthetic and ethical value of life. Kierkegaard and Nietzsche were also precursors to other intellectual movements, including postmodernism and various strands of psychotherapy. However, Kierkegaard believed that individuals should live in accordance with their thinking.

The first literary author of major importance to existentialism was the Russian novelist Dostoyevsky. Dostoyevsky's Notes from Underground portrays a man unable to fit into society and unhappy with the identities he creates for himself. Sartre, in his book on existentialism, Existentialism is a Humanism, quoted Dostoyevsky's The Brothers Karamazov as an example of existential crisis. Other Dostoyevsky novels covered issues raised in existentialist philosophy while presenting story lines divergent from secular existentialism: for example, in Crime and Punishment, the protagonist Raskolnikov experiences an existential crisis and then moves toward a Christian Orthodox worldview similar to that advocated by Dostoyevsky himself.

In the first decades of the 20th century, a number of philosophers and writers explored existentialist ideas. The Spanish philosopher Miguel de Unamuno y Jugo, in his 1913 book The Tragic Sense of Life in Men and Nations, emphasized the life of "flesh and bone" as opposed to that of abstract rationalism. Unamuno rejected systematic philosophy in favor of the individual's quest for faith. He retained a sense of the tragic, even absurd nature of the quest, symbolized by his enduring interest in the eponymous character from the Miguel de Cervantes novel Don Quixote.
A novelist, poet and dramatist as well as philosophy professor at the University of Salamanca, Unamuno wrote a short story about a priest's crisis of faith, Saint Manuel the Good, Martyr, which has been collected in anthologies of existentialist fiction. Another Spanish thinker, José Ortega y Gasset, writing in 1914, held that human existence must always be defined as the individual person combined with the concrete circumstances of his life: "Yo soy yo y mi circunstancia" ("I am myself and my circumstances"). Sartre likewise believed that human existence is not an abstract matter, but is always situated ("en situation"). Although Martin Buber wrote his major philosophical works in German, and studied and taught at the Universities of Berlin and Frankfurt, he stands apart from the mainstream of German philosophy. Born into a Jewish family in Vienna in 1878, he was also a scholar of Jewish culture and involved at various times in Zionism and Hasidism. In 1938, he moved permanently to Jerusalem. His best-known philosophical work was the short book I and Thou, published in 1922. For Buber, the fundamental fact of human existence, too readily overlooked by scientific rationalism and abstract philosophical thought, is "man with man", a dialogue that takes place in the so-called "sphere of between" ("das Zwischenmenschliche"). Two Russian philosophers, Lev Shestov and Nikolai Berdyaev, became well known as existentialist thinkers during their post-Revolutionary exiles in Paris. Shestov had launched an attack on rationalism and systematization in philosophy as early as 1905 in his book of aphorisms All Things Are Possible. Berdyaev drew a radical distinction between the world of spirit and the everyday world of objects. Human freedom, for Berdyaev, is rooted in the realm of spirit, a realm independent of scientific notions of causation. To the extent the individual human being lives in the objective world, he is estranged from authentic spiritual freedom. "Man" is not to be interpreted naturalistically, but as a being created in God's image, an originator of free, creative acts. He published a major work on these themes, The Destiny of Man, in 1931. Gabriel Marcel, long before coining the term "existentialism", introduced important existentialist themes to a French audience in his early essay "Existence and Objectivity" (1925) and in his Metaphysical Journal (1927). A dramatist as well as a philosopher, Marcel found his philosophical starting point in a condition of metaphysical alienation: the human individual searching for harmony in a transient life. Harmony, for Marcel, was to be sought through "secondary reflection", a "dialogical" rather than "dialectical" approach to the world, characterized by "wonder and astonishment" and open to the "presence" of other people and of God rather than merely to "information" about them. For Marcel, such presence implied more than simply being there (as one thing might be in the presence of another thing); it connoted "extravagant" availability, and the willingness to put oneself at the disposal of the other. Marcel contrasted secondary reflection with abstract, scientific-technical primary reflection, which he associated with the activity of the abstract Cartesian ego. For Marcel, philosophy was a concrete activity undertaken by a sensing, feeling human being incarnate—embodied—in a concrete world. 
Although Sartre adopted the term "existentialism" for his own philosophy in the 1940s, Marcel's thought has been described as "almost diametrically opposed" to that of Sartre. Unlike Sartre, Marcel was a Christian, and became a Catholic convert in 1929. In Germany, the psychologist and philosopher Karl Jaspers—who later described existentialism as a "phantom" created by the public—called his own thought, heavily influenced by Kierkegaard and Nietzsche, Existenzphilosophie. For Jaspers, "Existenz-philosophy is the way of thought by means of which man seeks to become himself...This way of thought does not cognize objects, but elucidates and makes actual the being of the thinker". Jaspers, a professor at the university of Heidelberg, was acquainted with Heidegger, who held a professorship at Marburg before acceding to Husserl's chair at Freiburg in 1928. They held many philosophical discussions, but later became estranged over Heidegger's support of National Socialism. They shared an admiration for Kierkegaard, and in the 1930s, Heidegger lectured extensively on Nietzsche. Nevertheless, the extent to which Heidegger should be considered an existentialist is debatable. In Being and Time he presented a method of rooting philosophical explanations in human existence (Dasein) to be analysed in terms of existential categories (existentiale); and this has led many commentators to treat him as an important figure in the existentialist movement. Following the Second World War, existentialism became a well-known and significant philosophical and cultural movement, mainly through the public prominence of two French writers, Jean-Paul Sartre and Albert Camus, who wrote best-selling novels, plays and widely read journalism as well as theoretical texts. These years also saw the growing reputation of Being and Time outside Germany. Sartre dealt with existentialist themes in his 1938 novel Nausea and the short stories in his 1939 collection The Wall, and had published his treatise on existentialism, Being and Nothingness, in 1943, but it was in the two years following the liberation of Paris from the German occupying forces that he and his close associates—Camus, Simone de Beauvoir, Maurice Merleau-Ponty, and others—became internationally famous as the leading figures of a movement known as existentialism. In a very short period of time, Camus and Sartre in particular became the leading public intellectuals of post-war France, achieving by the end of 1945 "a fame that reached across all audiences." Camus was an editor of the most popular leftist (former French Resistance) newspaper Combat; Sartre launched his journal of leftist thought, Les Temps Modernes, and two weeks later gave the widely reported lecture on existentialism and secular humanism to a packed meeting of the Club Maintenant. Beauvoir wrote that "not a week passed without the newspapers discussing us"; existentialism became "the first media craze of the postwar era." By the end of 1947, Camus' earlier fiction and plays had been reprinted, his new play Caligula had been performed and his novel The Plague published; the first two novels of Sartre's The Roads to Freedom trilogy had appeared, as had Beauvoir's novel The Blood of Others. Works by Camus and Sartre were already appearing in foreign editions. The Paris-based existentialists had become famous. 
Sartre had traveled to Germany in 1930 to study the phenomenology of Edmund Husserl and Martin Heidegger, and he included critical comments on their work in his major treatise Being and Nothingness. Heidegger's thought had also become known in French philosophical circles through its use by Alexandre Kojève in explicating Hegel in a series of lectures given in Paris in the 1930s. The lectures were highly influential; members of the audience included not only Sartre and Merleau-Ponty, but Raymond Queneau, Georges Bataille, Louis Althusser, André Breton, and Jacques Lacan. A selection from Being and Time was published in French in 1938, and his essays began to appear in French philosophy journals. Heidegger read Sartre's work and was initially impressed, commenting: "Here for the first time I encountered an independent thinker who, from the foundations up, has experienced the area out of which I think. Your work shows such an immediate comprehension of my philosophy as I have never before encountered." Later, however, in response to a question posed by his French follower Jean Beaufret, Heidegger distanced himself from Sartre's position and existentialism in general in his Letter on Humanism. Heidegger's reputation continued to grow in France during the 1950s and 1960s. In the 1960s, Sartre attempted to reconcile existentialism and Marxism in his work Critique of Dialectical Reason. A major theme throughout his writings was freedom and responsibility. Camus was a friend of Sartre, until their falling-out, and wrote several works with existential themes including The Rebel, Summer in Algiers, The Myth of Sisyphus, and The Stranger, the latter being "considered—to what would have been Camus's irritation—the exemplary existentialist novel." Camus, like many others, rejected the existentialist label, and considered his works concerned with facing the absurd. In the titular book, Camus uses the analogy of the Greek myth of Sisyphus to demonstrate the futility of existence. In the myth, Sisyphus is condemned for eternity to roll a rock up a hill, but when he reaches the summit, the rock will roll to the bottom again. Camus believes that this existence is pointless but that Sisyphus ultimately finds meaning and purpose in his task, simply by continually applying himself to it. The first half of the book contains an extended rebuttal of what Camus took to be existentialist philosophy in the works of Kierkegaard, Shestov, Heidegger, and Jaspers. Simone de Beauvoir, an important existentialist who spent much of her life as Sartre's partner, wrote about feminist and existentialist ethics in her works, including The Second Sex and The Ethics of Ambiguity. Although often overlooked due to her relationship with Sartre, de Beauvoir integrated existentialism with other forms of thinking such as feminism, unheard of at the time, resulting in alienation from fellow writers such as Camus. Paul Tillich, an important existentialist theologian following Kierkegaard and Karl Barth, applied existentialist concepts to Christian theology, and helped introduce existential theology to the general public. His seminal work The Courage to Be follows Kierkegaard's analysis of anxiety and life's absurdity, but puts forward the thesis that modern humans must, via God, achieve selfhood in spite of life's absurdity. Rudolf Bultmann used Kierkegaard's and Heidegger's philosophy of existence to demythologize Christianity by interpreting Christian mythical concepts into existentialist concepts. 
Maurice Merleau-Ponty, an existential phenomenologist, was for a time a companion of Sartre. Merleau-Ponty's Phenomenology of Perception (1945) was recognized as a major statement of French existentialism. It has been said that Merleau-Ponty's work Humanism and Terror greatly influenced Sartre. However, in later years they were to disagree irreparably, dividing many existentialists such as de Beauvoir, who sided with Sartre. Colin Wilson, an English writer, published his study The Outsider in 1956, initially to critical acclaim. In this book and others (e.g. Introduction to the New Existentialism), he attempted to reinvigorate what he perceived as a pessimistic philosophy and bring it to a wider audience. He was not, however, academically trained, and his work was attacked by professional philosophers for lack of rigor and critical standards. Stanley Kubrick's 1957 anti-war film Paths of Glory "illustrates, and even illuminates...existentialism" by examining the "necessary absurdity of the human condition" and the "horror of war". The film tells the story of a fictional World War I French army regiment ordered to attack an impregnable German stronghold; when the attack fails, three soldiers are chosen at random, court-martialed by a "kangaroo court", and executed by firing squad. The film examines existentialist ethics, such as the issue of whether objectivity is possible and the "problem of authenticity". Orson Welles's 1962 film The Trial, based upon Franz Kafka's book of the same name (Der Prozeß), is characteristic of both existentialist and absurdist themes in its depiction of a man (Joseph K.) arrested for a crime for which the charges are neither revealed to him nor to the reader. Neon Genesis Evangelion is a Japanese science fiction animation series created by the anime studio Gainax and was both directed and written by Hideaki Anno. Existential themes of individuality, consciousness, freedom, choice, and responsibility are heavily relied upon throughout the entire series, particularly through the philosophies of Jean-Paul Sartre and Søren Kierkegaard. Episode 16's title, "The Sickness Unto Death, And..." (死に至る病、そして, Shi ni itaru yamai, soshite) is a reference to Kierkegaard's book, The Sickness Unto Death. Some contemporary films dealing with existentialist issues include Melancholia, Fight Club, I Heart Huckabees, Waking Life, The Matrix, Ordinary People, Life in a Day, Barbie, and Everything Everywhere All at Once. Likewise, films throughout the 20th century such as The Seventh Seal, Ikiru, Taxi Driver, the Toy Story films, The Great Silence, Ghost in the Shell, Harold and Maude, High Noon, Easy Rider, One Flew Over the Cuckoo's Nest, A Clockwork Orange, Groundhog Day, Apocalypse Now, Badlands, and Blade Runner also have existentialist qualities. Notable directors known for their existentialist films include Ingmar Bergman, Bela Tarr, Robert Bresson, Jean-Pierre Melville, François Truffaut, Jean-Luc Godard, Michelangelo Antonioni, Akira Kurosawa, Terrence Malick, Stanley Kubrick, Andrei Tarkovsky, Hideaki Anno, Wes Anderson, Gaspar Noé, Woody Allen, and Christopher Nolan. Charlie Kaufman's Synecdoche, New York focuses on the protagonist's desire to find existential meaning. Similarly, in Kurosawa's Red Beard, the protagonist's experiences as an intern in a rural health clinic in Japan lead him to an existential crisis whereby he questions his reason for being. This, in turn, leads him to a better understanding of humanity. 
The French film, Mood Indigo (directed by Michel Gondry) embraced various elements of existentialism. The film The Shawshank Redemption, released in 1994, depicts life in a prison in Maine, United States to explore several existentialist concepts. Existential perspectives are also found in modern literature to varying degrees, especially since the 1920s. Louis-Ferdinand Céline's Journey to the End of the Night (Voyage au bout de la nuit, 1932) celebrated by both Sartre and Beauvoir, contained many of the themes that would be found in later existential literature, and is in some ways, the proto-existential novel. Jean-Paul Sartre's 1938 novel Nausea was "steeped in Existential ideas", and is considered an accessible way of grasping his philosophical stance. Between 1900 and 1960, other authors such as Albert Camus, Franz Kafka, Rainer Maria Rilke, T. S. Eliot, Hermann Hesse, Luigi Pirandello, Ralph Ellison, and Jack Kerouac composed literature or poetry that contained, to varying degrees, elements of existential or proto-existential thought. The philosophy's influence even reached pulp literature shortly after the turn of the 20th century, as seen in the existential disparity witnessed in Man's lack of control of his fate in the works of H. P. Lovecraft. Sartre wrote No Exit in 1944, an existentialist play originally published in French as Huis Clos (meaning In Camera or "behind closed doors"), which is the source of the popular quote, "Hell is other people." (In French, "L'enfer, c'est les autres"). The play begins with a Valet leading a man into a room that the audience soon realizes is in hell. Eventually he is joined by two women. After their entry, the Valet leaves and the door is shut and locked. All three expect to be tortured, but no torturer arrives. Instead, they realize they are there to torture each other, which they do effectively by probing each other's sins, desires, and unpleasant memories. Existentialist themes are displayed in the Theatre of the Absurd, notably in Samuel Beckett's Waiting for Godot, in which two men divert themselves while they wait expectantly for someone (or something) named Godot who never arrives. They claim Godot is an acquaintance, but in fact, hardly know him, admitting they would not recognize him if they saw him. Samuel Beckett, once asked who or what Godot is, replied, "If I knew, I would have said so in the play." To occupy themselves, the men eat, sleep, talk, argue, sing, play games, exercise, swap hats, and contemplate suicide—anything "to hold the terrible silence at bay". The play "exploits several archetypal forms and situations, all of which lend themselves to both comedy and pathos." The play also illustrates an attitude toward human experience on earth: the poignancy, oppression, camaraderie, hope, corruption, and bewilderment of human experience that can be reconciled only in the mind and art of the absurdist. The play examines questions such as death, the meaning of human existence and the place of God in human existence. Tom Stoppard's Rosencrantz & Guildenstern Are Dead is an absurdist tragicomedy first staged at the Edinburgh Festival Fringe in 1966. The play expands upon the exploits of two minor characters from Shakespeare's Hamlet. Comparisons have also been drawn to Samuel Beckett's Waiting for Godot, for the presence of two central characters who appear almost as two halves of a single character. 
Many plot features are similar as well: the characters pass time by playing Questions, impersonating other characters, and interrupting each other or remaining silent for long periods of time. The two characters are portrayed as two clowns or fools in a world beyond their understanding. They stumble through philosophical arguments while not realizing the implications, and muse on the irrationality and randomness of the world. Jean Anouilh's Antigone also presents arguments founded on existentialist ideas. It is a tragedy inspired by Greek mythology and the play of the same name (Antigone, by Sophocles) from the fifth century BC. In English, it is often distinguished from its antecedent by being pronounced in its original French form, approximately "Ante-GŌN." The play was first performed in Paris on 6 February 1944, during the Nazi occupation of France. Produced under Nazi censorship, the play is purposefully ambiguous with regards to the rejection of authority (represented by Antigone) and the acceptance of it (represented by Creon). The parallels to the French Resistance and the Nazi occupation have been drawn. Antigone rejects life as desperately meaningless but without affirmatively choosing a noble death. The crux of the play is the lengthy dialogue concerning the nature of power, fate, and choice, during which Antigone says that she is, "... disgusted with [the]...promise of a humdrum happiness." She states that she would rather die than live a mediocre existence. Critic Martin Esslin in his book Theatre of the Absurd pointed out how many contemporary playwrights such as Samuel Beckett, Eugène Ionesco, Jean Genet, and Arthur Adamov wove into their plays the existentialist belief that we are absurd beings loose in a universe empty of real meaning. Esslin noted that many of these playwrights demonstrated the philosophy better than did the plays by Sartre and Camus. Though most of such playwrights, subsequently labeled "Absurdist" (based on Esslin's book), denied affiliations with existentialism and were often staunchly anti-philosophical (for example Ionesco often claimed he identified more with 'Pataphysics or with Surrealism than with existentialism), the playwrights are often linked to existentialism based on Esslin's observation. Black existentialism explores the existence and experiences of Black people in the world. Classical and contemporary thinkers include C.L.R James, Frederick Douglass, W.E.B DuBois, Frantz Fanon, Angela Davis, Cornell West, Naomi Zack, bell hooks, Stuart Hall, Lewis Gordon, and Audre Lorde. A major offshoot of existentialism as a philosophy is existentialist psychology and psychoanalysis, which first crystallized in the work of Otto Rank, Freud's closest associate for 20 years. Without awareness of the writings of Rank, Ludwig Binswanger was influenced by Freud, Edmund Husserl, Heidegger, and Sartre. A later figure was Viktor Frankl, who briefly met Freud as a young man. His logotherapy can be regarded as a form of existentialist therapy. The existentialists would also influence social psychology, antipositivist micro-sociology, symbolic interactionism, and post-structuralism, with the work of thinkers such as Georg Simmel and Michel Foucault. Foucault was a great reader of Kierkegaard even though he almost never refers to this author, who nonetheless had for him an importance as secret as it was decisive. An early contributor to existentialist psychology in the United States was Rollo May, who was strongly influenced by Kierkegaard and Otto Rank. 
One of the most prolific writers on techniques and theory of existentialist psychology in the US is Irvin D. Yalom. Yalom states that Aside from their reaction against Freud's mechanistic, deterministic model of the mind and their assumption of a phenomenological approach in therapy, the existentialist analysts have little in common and have never been regarded as a cohesive ideological school. These thinkers—who include Ludwig Binswanger, Medard Boss, Eugène Minkowski, V. E. Gebsattel, Roland Kuhn, G. Caruso, F. T. Buytendijk, G. Bally, and Victor Frankl—were almost entirely unknown to the American psychotherapeutic community until Rollo May's highly influential 1958 book Existence—and especially his introductory essay—introduced their work into this country. A more recent contributor to the development of a European version of existentialist psychotherapy is the British-based Emmy van Deurzen. Anxiety's importance in existentialism makes it a popular topic in psychotherapy. Therapists often offer existentialist philosophy as an explanation for anxiety. The assertion is that anxiety is manifested of an individual's complete freedom to decide, and complete responsibility for the outcome of such decisions. Psychotherapists using an existentialist approach believe that a patient can harness his anxiety and use it constructively. Instead of suppressing anxiety, patients are advised to use it as grounds for change. By embracing anxiety as inevitable, a person can use it to achieve his full potential in life. Humanistic psychology also had major impetus from existentialist psychology and shares many of the fundamental tenets. Terror management theory, based on the writings of Ernest Becker and Otto Rank, is a developing area of study within the academic study of psychology. It looks at what researchers claim are implicit emotional reactions of people confronted with the knowledge that they will eventually die. Also, Gerd B. Achenbach has refreshed the Socratic tradition with his own blend of philosophical counseling; as did Michel Weber with his Chromatiques Center in Belgium. Walter Kaufmann criticized "the profoundly unsound methods and the dangerous contempt for reason that have been so prominent in existentialism." Logical positivist philosophers, such as Rudolf Carnap and A. J. Ayer, assert that existentialists are often confused about the verb "to be" in their analyses of "being". Specifically, they argue that the verb "is" is transitive and pre-fixed to a predicate (e.g., an apple is red) (without a predicate, the word "is" is meaningless), and that existentialists frequently misuse the term in this manner. Wilson has stated in his book The Angry Years that existentialism has created many of its own difficulties: "We can see how this question of freedom of the will has been vitiated by post-romantic philosophy, with its inbuilt tendency to laziness and boredom, we can also see how it came about that existentialism found itself in a hole of its own digging, and how the philosophical developments since then have amounted to walking in circles round that hole." Many critics argue Sartre's philosophy is contradictory. Specifically, they argue that Sartre makes metaphysical arguments despite his claiming that his philosophical views ignore metaphysics. 
Herbert Marcuse criticized Being and Nothingness for projecting anxiety and meaninglessness onto the nature of existence itself: "Insofar as Existentialism is a philosophical doctrine, it remains an idealistic doctrine: it hypostatizes specific historical conditions of human existence into ontological and metaphysical characteristics. Existentialism thus becomes part of the very ideology which it attacks, and its radicalism is illusory." In Letter on Humanism, Heidegger criticized Sartre's existentialism: "Existentialism says existence precedes essence. In this statement he is taking existentia and essentia according to their metaphysical meaning, which, from Plato's time on, has said that essentia precedes existentia. Sartre reverses this statement. But the reversal of a metaphysical statement remains a metaphysical statement. With it, he stays with metaphysics, in oblivion of the truth of Being."
Existentialism is a form of philosophical inquiry that explores the issue of human existence. Existentialist philosophers explore questions related to the meaning, purpose, and value of human existence. Common concepts in existentialist thought include existential crisis, dread, and anxiety in the face of an absurd world, as well as authenticity, courage, and virtue. Existentialism is associated with several 19th- and 20th-century European philosophers who shared an emphasis on the human subject, despite often profound differences in thought. Among the earliest figures associated with existentialism are philosophers Søren Kierkegaard, Friedrich Nietzsche and novelist Fyodor Dostoevsky, all of whom critiqued rationalism and concerned themselves with the problem of meaning. In the 20th century, prominent existentialist thinkers included Jean-Paul Sartre, Albert Camus, Martin Heidegger, Simone de Beauvoir, Karl Jaspers, Gabriel Marcel, and Paul Tillich. Many existentialists considered traditional systematic or academic philosophies, in style and content, to be too abstract and removed from concrete human experience. A primary virtue in existentialist thought is authenticity. Existentialism would influence many disciplines outside of philosophy, including theology, drama, art, literature, and psychology. Existentialist philosophy encompasses a range of perspectives, but it shares certain underlying concepts. Among these, a central tenet of existentialism is that personal freedom, individual responsibility, and deliberate choice are essential to the pursuit of self-discovery and the determination of life's meaning.
2001-08-17T16:28:24Z
2023-11-28T00:13:41Z
[ "Template:Main", "Template:Blockquote", "Template:Rp", "Template:Cite IEP", "Template:Lang", "Template:Existentialism", "Template:For", "Template:See also", "Template:Cite news", "Template:Curlie", "Template:Short description", "Template:Lang-fr", "Template:Technical", "Template:Clarify", "Template:Unreferenced section", "Template:Cite encyclopedia", "Template:Authority control", "Template:Redirect", "Template:Multiple image", "Template:Sfn", "Template:Citation needed", "Template:Reflist", "Template:Refbegin", "Template:Div col", "Template:Div col end", "Template:Cite book", "Template:In lang", "Template:Sister project links", "Template:Cite SEP", "Template:Nihongo", "Template:Citation", "Template:In Our Time", "Template:Cite journal", "Template:Cite web", "Template:Refend", "Template:Navboxes" ]
https://en.wikipedia.org/wiki/Existentialism
9,596
Ellipsis
The ellipsis ... (/əˈlɪpsɪs/; also known informally as dot dot dot) is a series of dots that indicates an intentional omission of a word, sentence, or whole section from a text without altering its original meaning. The plural is ellipses. The term originates from the Ancient Greek: ἔλλειψις, élleipsis meaning 'leave out'. Opinions differ as to how to render ellipses in printed material. According to The Chicago Manual of Style, it should consist of three periods, each separated from its neighbor by a non-breaking space: . . .. According to the AP Stylebook, the periods should be rendered with no space between them: .... A third option is to use the Unicode character U+2026 … HORIZONTAL ELLIPSIS. The ellipsis is also called a suspension point, points of ellipsis, periods of ellipsis, or (colloquially) "dot-dot-dot". Depending on their context and placement in a sentence, ellipses can indicate an unfinished thought, a leading statement, a slight pause, an echoing voice, or a nervous or awkward silence. Aposiopesis is the use of an ellipsis to trail off into silence—for example: "But I thought he was..." When placed at the end of a sentence, an ellipsis may be used to suggest melancholy or longing. The most common forms of an ellipsis include a row of three periods or full points ... or a precomposed triple-dot glyph, the horizontal ellipsis …. Style guides often have their own rules governing the use of ellipses. For example, The Chicago Manual of Style (Chicago style) recommends that an ellipsis be formed by typing three periods, each with a space on both sides . . . , while the Associated Press Stylebook (AP style) puts the dots together, but retains a space before and after the group, thus: ... . Whether an ellipsis at the end of a sentence needs a fourth dot to finish the sentence is a matter of debate; Chicago advises it, as does the Publication Manual of the American Psychological Association (APA style), while some other style guides do not; the Merriam-Webster Dictionary and related works treat this style as optional, saying that it "may" be used. When text is omitted following a sentence, a normal full stop (period) terminates the sentence, and then a separate three-dot ellipsis is commonly used to indicate one or more subsequent omitted sentences before continuing a longer quotation. Business Insider magazine suggests this style and it is also used in many academic journals. The Associated Press Stylebook favors this approach. In her book on the ellipsis, Ellipsis in English Literature: Signs of Omission, Anne Toner suggests that the first use of the punctuation in the English language dates to a 1588 translation of Terence's Andria, by Maurice Kyffin. In this case, however, the ellipsis consists not of dots but of short dashes. "Subpuncting" of medieval manuscripts also denotes omitted meaning and may be related. Occasionally, it would be used in pulp fiction and other works of early 20th-century fiction to denote expletives that would otherwise have been censored. An ellipsis may also imply an unstated alternative indicated by context. For example, "I never drink wine ..." implies that the speaker does drink something else—such as vodka. In reported speech, the ellipsis can be used to represent an intentional silence. In poetry, an ellipsis is used as a thought-pause or line break at the caesura or this is used to highlight sarcasm or make the reader think about the last points in the poem. 
In news reporting, often put inside square brackets, it is used to indicate that a quotation has been condensed for space, brevity or relevance, as in "The President said that [...] he would not be satisfied", where the exact quotation was "The President said that, for as long as this situation continued, he would not be satisfied". Herb Caen, Pulitzer-prize-winning columnist for the San Francisco Chronicle, became famous for his "three-dot journalism". The Chicago Manual of Style suggests the use of an ellipsis for any omitted word, phrase, line, or paragraph from within but not at the end of a quoted passage. There are two commonly used methods of using ellipses: one uses three dots for any omission, while the second one makes a distinction between omissions within a sentence (using three dots: . . .) and omissions between sentences (using a period and a space followed by three dots: . ...). The Chicago Style Q&A recommends that writers avoid using the precomposed … (U+2026) character in manuscripts and to place three periods plus two nonbreaking spaces (. . .) instead, leaving the editor, publisher, or typographer to replace them later. The Modern Language Association (MLA) used to indicate that an ellipsis must include spaces before and after each dot in all uses. If an ellipsis is meant to represent an omission, square brackets must surround the ellipsis to make it clear that there was no pause in the original quote: [ . . . ]. Currently, the MLA has removed the requirement of brackets in its style handbooks. However, some maintain that the use of brackets is still correct because it clears confusion. The MLA now indicates that a three-dot, spaced ellipsis . . . should be used for removing material from within one sentence within a quote. When crossing sentences (when the omitted text contains a period, so that omitting the end of a sentence counts), a four-dot, spaced (except for before the first dot) ellipsis . . . . should be used. When ellipsis points are used in the original text, ellipsis points that are not in the original text should be distinguished by enclosing them in square brackets (e.g. text [...] text). According to the Associated Press, the ellipsis should be used to condense quotations. It is less commonly used to indicate a pause in speech or an unfinished thought or to separate items in material such as show business gossip. The stylebook indicates that if the shortened sentence before the mark can stand as a sentence, it should do so, with an ellipsis placed after the period or other ending punctuation. When material is omitted at the end of a paragraph and also immediately following it, an ellipsis goes both at the end of that paragraph and at the beginning of the next, according to this style. According to Robert Bringhurst's Elements of Typographic Style, the details of typesetting ellipses depend on the character and size of the font being set and the typographer's preference. Bringhurst writes that a full space between each dot is "another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide"—he recommends using flush dots (with a normal word space before and after), or thin-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character U+2026 … HORIZONTAL ELLIPSIS (&hellip;, &mldr;). Bringhurst suggests that normally an ellipsis should be spaced fore-and-aft to separate it from the text, but when it combines with other punctuation, the leading space disappears and the other punctuation follows. 
This is the usual practice in typesetting. He provides the following examples: In legal writing in the United States, Rule 5.3 in the Bluebook citation guide governs the use of ellipses and requires a space before the first dot and between the two subsequent dots. If an ellipsis ends the sentence, then there are three dots, each separated by a space, followed by the final punctuation (e.g. Hah . . . ?). In some legal writing, an ellipsis is written as three asterisks, *** or * * *, to make it obvious that text has been omitted or to signal that the omitted text extends beyond the end of the paragraph. The Oxford Style Guide recommends setting the ellipsis as a single character … or as a series of three (narrow) spaced dots surrounded by spaces, thus: . . . . If there is an ellipsis at the end of an incomplete sentence, the final full stop is omitted. However, it is retained if the following ellipsis represents an omission between two complete sentences. The … fox jumps … The quick brown fox jumps over the lazy dog. … And if they have not died, they are still alive today. It is not cold … it is freezing cold. Contrary to The Oxford Style Guide, the University of Oxford Style Guide demands an ellipsis not to be surrounded by spaces, except when it stands for a pause; then, a space has to be set after the ellipsis (but not before). An ellipsis is never preceded or followed by a full stop. The...fox jumps... The quick brown fox jumps over the lazy dog...And if they have not died, they are still alive today. It is not cold... it is freezing cold. When applied in Polish syntax, the ellipsis is called wielokropek, literally 'multidot'. The word wielokropek distinguishes the ellipsis of Polish syntax from that of mathematical notation, in which it is known as an elipsa. When an ellipsis replaces a fragment omitted from a quotation, the ellipsis is enclosed in parentheses or square brackets. An unbracketed ellipsis indicates an interruption or pause in speech. The syntactic rules for ellipses are standardized by the 1983 Polska Norma document PN-83/P-55366, Zasady składania tekstów w języku polskim (Rules for Setting Texts in Polish). The combination "ellipsis+period" is replaced by the ellipsis. The combinations "ellipsis+exclamation mark" and "ellipsis+question mark" are written in this way: !.. ?.. The most common character corresponding to an ellipsis is called 3-ten rīdā ("3-dot leaders", …). 2-ten rīdā exists as a character, but it is used less commonly. In writing, the ellipsis consists usually of six dots (two 3-ten rīdā characters, ……). Three dots (one 3-ten rīdā character) may be used where space is limited, such as in a header. However, variations in the number of dots exist. In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line), as in the standard Japanese Windows fonts; in vertically written text the dots are always centered horizontally. As the Japanese word for dot is pronounced "ten", the dots are colloquially called "ten-ten-ten" (てんてんてん, akin to the English "dot dot dot"). In text in Japanese media, such as in manga or video games, ellipses are much more frequent than in English, and are often changed to another punctuation sign in translation. The ellipsis by itself represents speechlessness, or a "pregnant pause". Depending on the context, this could be anything from an admission of guilt to an expression of being dumbfounded at another person's words or actions. 
As a device, the ten-ten-ten is intended to focus the reader on a character while allowing the character to not speak any dialogue. This conveys to the reader a focus of the narrative "camera" on the silent subject, implying an expectation of some motion or action. It is not unheard of to see inanimate objects "speaking" the ellipsis. In Chinese, the ellipsis is six dots (in two groups of three dots, occupying the same horizontal or vertical space as two characters). In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line, i.e. ⋯⋯); in vertically written text the dots are always centered horizontally (i.e. Chinese: ︙︙). In Spanish, the ellipsis is commonly used as a substitute for et cetera at the end of unfinished lists. So it means "and so forth" or "and other things". Another use is the suspension of a part of a text, or a paragraph, or a phrase or a part of a word because it is obvious, or unnecessary, or implied. For instance, sometimes the ellipsis is used to avoid the complete use of expletives. When the ellipsis is placed alone into a parenthesis (...) or—less often—between brackets [...], which usually happens within a text transcription, it means the original text had more content at that position, but that content is not relevant to the purpose of the transcription. When the suppressed text is at the beginning or at the end of a text, the ellipsis does not need to be placed in a parenthesis. The number of dots is three and only three. In French, the ellipsis is commonly used at the end of lists to represent et cetera. In French typography, the ellipsis is written immediately after the preceding word, but has a space after it, for example: comme ça... pas comme ceci. If, exceptionally, it begins a sentence, there is a space before and after, for example: Lui ? ... vaut rien, je crois.... However, any omitted word, phrase or line at the end of a quoted passage would be indicated as follows: [...] (space before and after the square brackets but not inside), for example: ... à Paris, Nice, Nantes, Toulouse [...]. In German, the ellipsis is in general surrounded by spaces if it stands for one or more omitted words. On the other hand, there is no space between a letter or (part of) a word and an ellipsis if it stands for one or more omitted letters, which should stick to the written letter or letters. Example for both cases, using German style: The first el...is stands for omitted letters, the second ... for an omitted word. If the ellipsis is at the end of a sentence, the final full stop is omitted. Example: I think that ... The Accademia della Crusca suggests the use of an ellipsis ("puntini di sospensione") to indicate a pause longer than a period and, when placed between brackets, the omission of letters, words or phrases. "Tra le cose più preziose possedute da Andrea Sperelli era una coperta di seta fina, d’un colore azzurro disfatto, intorno a cui giravano i dodici segni dello Zodiaco in ricamo, con le denominazioni […] a caratteri gotici." (Gabriele D’Annunzio, Il piacere) In computer menu functions or buttons, an ellipsis means that upon selection more options (sometimes in the form of a dialog box) will be displayed, where the user can or must make a choice. If the ellipsis is absent, the function is immediately executed upon selection. For example, the menu item "Save" indicates that the file will be overwritten without further input, whereas "Save as..."
indicates that a dialog follows where the user can, for example, select another location, file name, or format. Ellipses are also used as a separate button (particularly considering the limited screen area of mobile apps) to represent partially or completely hidden options. This usage may alternatively be described as a "More button" (see also hamburger button signifying completely hidden options). In mobile, web, and general application design, the vertical ellipsis, ⋮, is sometimes used as an interface element, where it is sometimes called a kebab icon. The element typically indicates that a navigation menu can be accessed when the element is activated, and is a smaller version of the hamburger icon (≡) which is a stylized rendering of a menu. An ellipsis is also often used in mathematics to mean "and so forth". In a list, between commas, or following a comma, a normal ellipsis is used, as in: or to mean an infinite list, as: To indicate the omission of values in a repeated operation, an ellipsis raised to the center of the line is used between two operation symbols or following the last operation symbol, as in: Sometimes, e.g. in Russian mathematical texts, normal, non-raised, ellipses are used even in repeated summations. The latter formula means the sum of all natural numbers from 1 to 100. However, it is not a formally defined mathematical symbol. Repeated summations or products may similarly be denoted using capital sigma and capital pi notation, respectively: Normally dots should be used only where the pattern to be followed is clear, the exception being to show the indefinite continuation of an irrational number such as: Sometimes, it is useful to display a formula compactly, for example: Another example is the set of positive zeros of the cosine function: There are many related uses of the ellipsis in set notation. The diagonal and vertical forms of the ellipsis are particularly useful for showing missing terms in matrices, such as the size-n identity matrix: A two- or three-dot ellipsis is used as an operator in some programming languages. One of its most common uses is in defining ranges or sequences, for instance 1..10 means all the numbers from 1 through 10. This is used in many languages, including Pascal, Modula, Oberon, Ada, Haskell, Perl, Ruby, Rust, Swift, Kotlin, Bash shell and F#. It is also used to indicate variadic functions in the C, C++ and Java languages. The CSS text-overflow property can be set to ellipsis, which cuts off text with an ellipsis when it overflows the content area. The ellipsis is a non-verbal cue that is often used in computer-mediated interactions, in particular in synchronous genres, such as chat. The reason behind its popularity is the fact that it allows people to indicate in writing several functions: Although an ellipsis is technically complete with three periods (...), its rise in popularity as a "trailing-off" or "silence" indicator, particularly in mid-20th-century comic strip and comic book prose writing, has led to expanded uses online. Today, extended ellipses anywhere from two to dozens of periods have become common constructions in Internet chat rooms and text messages. The extent of repetition in itself might serve as an additional contextualization or paralinguistic cue; one paper wrote that they "extend the lexical meaning of the words, add character to the sentences, and allow fine-tuning and personalisation of the message". 
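The range, variadic, and text-overflow uses described in the programming paragraphs above can be made concrete with a short sketch. The following is a minimal illustration in Swift (one of the languages listed as using a dotted range operator), not a definitive implementation; the names sum and truncate are hypothetical helpers invented for this example, and truncate only mimics the visual effect of the CSS text-overflow: ellipsis setting rather than reproducing any browser behavior.

    // Closed range: the three-dot operator yields every value from 1 through 10,
    // comparable to the inclusive 1..10 ranges of Pascal, Ruby, Kotlin and similar languages.
    for i in 1...10 {
        print(i)
    }

    // Variadic parameter: the trailing dots accept any number of arguments,
    // analogous in spirit to variadic functions in C, C++ and Java.
    func sum(_ values: Int...) -> Int {
        return values.reduce(0, +)
    }
    print(sum(1, 2, 3))   // prints 6

    // Hypothetical helper that cuts off long text and appends U+2026,
    // similar in effect to CSS "text-overflow: ellipsis".
    func truncate(_ text: String, to limit: Int) -> String {
        guard text.count > limit else { return text }
        return String(text.prefix(limit - 1)) + "\u{2026}"
    }
    print(truncate("The quick brown fox jumps over the lazy dog", to: 16))   // "The quick brown…"

Swift also offers the half-open form 1..<10, which excludes the upper bound; languages differ on whether their two- or three-dot range notation is inclusive.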
In some text messaging software products, an ellipsis is displayed while the interlocutor is typing characters. The feature has been referred to as a typing awareness indicator. Rows of dots are also used to indicate that a longer-lasting operation is in progress (e.g. in the initial startup messages of text-mode operating systems like DOS or in bootsectors, i.e. "Loading...", "Starting..."). Sometimes this is implemented as an animated progress indicator where more dots are added after certain sub-operations (like loading a single sector) have finished (i.e. "Loading....."). In computing, several ellipsis characters have been codified, depending on the system used. In the Unicode standard, there are the following characters: Unicode recognizes a series of three period characters (U+002E) as compatibility equivalent (though not canonical) to the horizontal ellipsis character. In HTML, the horizontal ellipsis character may be represented by the entity reference &hellip; (since HTML 4.0), and the vertical ellipsis character by the entity reference &vellip; (since HTML 5.0). Alternatively, in HTML, XML, and SGML, a numeric character reference such as &#x2026; or &#8230; can be used. In the TeX typesetting system, the following types of ellipsis are available: In LaTeX, note that the reverse orientation of \ddots can be achieved with \reflectbox provided by the graphicx package: \reflectbox{\ddots} yields . With the amsmath package from AMS-LaTeX, more specific ellipses are provided for math mode. The horizontal ellipsis character also appears in the following older character maps: Note that ISO/IEC 8859 encoding series provides no code point for ellipsis. As with all characters, especially those outside the ASCII range, the author, sender and receiver of an encoded ellipsis must be in agreement upon what bytes are being used to represent the character. Naive text processing software may improperly assume that a particular encoding is being used, resulting in mojibake. In Windows, the horizontal ellipsis can be inserted with Alt+0133, using the numeric keypad. In macOS, it can be inserted with ⌥ Opt+; (on an English language keyboard). In some Linux distributions, it can be inserted with AltGr+. (this produces an interpunct on other systems), or Compose... In Android, ellipsis is a long-press key. If Gboard is in alphanumeric layout, change to numeric and special characters layout by pressing ?123 from alphanumeric layout. Once in numeric and special characters layout, long press . key to insert an ellipsis. This is a single symbol without spaces in between the three dots ( … ). In Chinese and sometimes in Japanese, ellipsis characters are made by entering two consecutive horizontal ellipses, each with Unicode code point U+2026. In vertical texts, the application should rotate the symbol accordingly.
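To ground the character-level details above, here is a small sketch. The Swift snippet only demonstrates facts stated in this section: the precomposed HORIZONTAL ELLIPSIS is the single code point U+2026 and is distinct from a run of three full stops, and the HTML/XML entity references are shown purely as literal strings, since no HTML decoding is attempted.

    // U+2026 HORIZONTAL ELLIPSIS is a single character ...
    let ellipsis = "\u{2026}"
    assert(ellipsis == "…")
    assert(ellipsis.unicodeScalars.first!.value == 0x2026)

    // ... whereas three full stops form a different, three-character string.
    assert("..." != ellipsis)
    assert("...".count == 3 && ellipsis.count == 1)

    // HTML/XML spellings of the same character, listed only as data.
    let references = ["&hellip;", "&#x2026;", "&#8230;"]
    print(references.joined(separator: " "))

For the TeX discussion, the corresponding LaTeX fragment below illustrates the standard ellipsis commands (\ldots, \cdots, \vdots and \ddots) in a display formula of the kind used in the mathematical-notation section; it assumes the amsmath package for the pmatrix environment and is an illustration rather than the article's own example.

    \[
      x_1, x_2, \ldots, x_n, \qquad
      1 + 2 + \cdots + 100 = \sum_{k=1}^{100} k, \qquad
      I_n = \begin{pmatrix}
        1      & 0      & \cdots & 0      \\
        0      & 1      & \cdots & 0      \\
        \vdots & \vdots & \ddots & \vdots \\
        0      & 0      & \cdots & 1
      \end{pmatrix}
    \]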
[ { "paragraph_id": 0, "text": "The ellipsis ... (/əˈlɪpsɪs/; also known informally as dot dot dot) is a series of dots that indicates an intentional omission of a word, sentence, or whole section from a text without altering its original meaning. The plural is ellipses. The term originates from the Ancient Greek: ἔλλειψις, élleipsis meaning 'leave out'.", "title": "" }, { "paragraph_id": 1, "text": "Opinions differ as to how to render ellipses in printed material. According to The Chicago Manual of Style, it should consist of three periods, each separated from its neighbor by a non-breaking space: . . .. According to the AP Stylebook, the periods should be rendered with no space between them: .... A third option is to use the Unicode character U+2026 … HORIZONTAL ELLIPSIS.", "title": "" }, { "paragraph_id": 2, "text": "The ellipsis is also called a suspension point, points of ellipsis, periods of ellipsis, or (colloquially) \"dot-dot-dot\". Depending on their context and placement in a sentence, ellipses can indicate an unfinished thought, a leading statement, a slight pause, an echoing voice, or a nervous or awkward silence. Aposiopesis is the use of an ellipsis to trail off into silence—for example: \"But I thought he was...\" When placed at the end of a sentence, an ellipsis may be used to suggest melancholy or longing.", "title": "Background" }, { "paragraph_id": 3, "text": "The most common forms of an ellipsis include a row of three periods or full points ... or a precomposed triple-dot glyph, the horizontal ellipsis …. Style guides often have their own rules governing the use of ellipses. For example, The Chicago Manual of Style (Chicago style) recommends that an ellipsis be formed by typing three periods, each with a space on both sides . . . , while the Associated Press Stylebook (AP style) puts the dots together, but retains a space before and after the group, thus: ... . Whether an ellipsis at the end of a sentence needs a fourth dot to finish the sentence is a matter of debate; Chicago advises it, as does the Publication Manual of the American Psychological Association (APA style), while some other style guides do not; the Merriam-Webster Dictionary and related works treat this style as optional, saying that it \"may\" be used.", "title": "Background" }, { "paragraph_id": 4, "text": "When text is omitted following a sentence, a normal full stop (period) terminates the sentence, and then a separate three-dot ellipsis is commonly used to indicate one or more subsequent omitted sentences before continuing a longer quotation. Business Insider magazine suggests this style and it is also used in many academic journals. The Associated Press Stylebook favors this approach.", "title": "Background" }, { "paragraph_id": 5, "text": "In her book on the ellipsis, Ellipsis in English Literature: Signs of Omission, Anne Toner suggests that the first use of the punctuation in the English language dates to a 1588 translation of Terence's Andria, by Maurice Kyffin. In this case, however, the ellipsis consists not of dots but of short dashes. \"Subpuncting\" of medieval manuscripts also denotes omitted meaning and may be related.", "title": "In writing" }, { "paragraph_id": 6, "text": "Occasionally, it would be used in pulp fiction and other works of early 20th-century fiction to denote expletives that would otherwise have been censored.", "title": "In writing" }, { "paragraph_id": 7, "text": "An ellipsis may also imply an unstated alternative indicated by context. 
For example, \"I never drink wine ...\" implies that the speaker does drink something else—such as vodka.", "title": "In writing" }, { "paragraph_id": 8, "text": "In reported speech, the ellipsis can be used to represent an intentional silence.", "title": "In writing" }, { "paragraph_id": 9, "text": "In poetry, an ellipsis is used as a thought-pause or line break at the caesura or this is used to highlight sarcasm or make the reader think about the last points in the poem.", "title": "In writing" }, { "paragraph_id": 10, "text": "In news reporting, often put inside square brackets, it is used to indicate that a quotation has been condensed for space, brevity or relevance, as in \"The President said that [...] he would not be satisfied\", where the exact quotation was \"The President said that, for as long as this situation continued, he would not be satisfied\".", "title": "In writing" }, { "paragraph_id": 11, "text": "Herb Caen, Pulitzer-prize-winning columnist for the San Francisco Chronicle, became famous for his \"three-dot journalism\".", "title": "In writing" }, { "paragraph_id": 12, "text": "The Chicago Manual of Style suggests the use of an ellipsis for any omitted word, phrase, line, or paragraph from within but not at the end of a quoted passage. There are two commonly used methods of using ellipses: one uses three dots for any omission, while the second one makes a distinction between omissions within a sentence (using three dots: . . .) and omissions between sentences (using a period and a space followed by three dots: . ...). The Chicago Style Q&A recommends that writers avoid using the precomposed … (U+2026) character in manuscripts and to place three periods plus two nonbreaking spaces (. . .) instead, leaving the editor, publisher, or typographer to replace them later.", "title": "In different languages" }, { "paragraph_id": 13, "text": "The Modern Language Association (MLA) used to indicate that an ellipsis must include spaces before and after each dot in all uses. If an ellipsis is meant to represent an omission, square brackets must surround the ellipsis to make it clear that there was no pause in the original quote: [ . . . ]. Currently, the MLA has removed the requirement of brackets in its style handbooks. However, some maintain that the use of brackets is still correct because it clears confusion.", "title": "In different languages" }, { "paragraph_id": 14, "text": "The MLA now indicates that a three-dot, spaced ellipsis . . . should be used for removing material from within one sentence within a quote. When crossing sentences (when the omitted text contains a period, so that omitting the end of a sentence counts), a four-dot, spaced (except for before the first dot) ellipsis . . . . should be used. When ellipsis points are used in the original text, ellipsis points that are not in the original text should be distinguished by enclosing them in square brackets (e.g. text [...] text).", "title": "In different languages" }, { "paragraph_id": 15, "text": "According to the Associated Press, the ellipsis should be used to condense quotations. It is less commonly used to indicate a pause in speech or an unfinished thought or to separate items in material such as show business gossip. The stylebook indicates that if the shortened sentence before the mark can stand as a sentence, it should do so, with an ellipsis placed after the period or other ending punctuation. 
When material is omitted at the end of a paragraph and also immediately following it, an ellipsis goes both at the end of that paragraph and at the beginning of the next, according to this style.", "title": "In different languages" }, { "paragraph_id": 16, "text": "According to Robert Bringhurst's Elements of Typographic Style, the details of typesetting ellipses depend on the character and size of the font being set and the typographer's preference. Bringhurst writes that a full space between each dot is \"another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide\"—he recommends using flush dots (with a normal word space before and after), or thin-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character U+2026 … HORIZONTAL ELLIPSIS (&hellip;, &mldr;). Bringhurst suggests that normally an ellipsis should be spaced fore-and-aft to separate it from the text, but when it combines with other punctuation, the leading space disappears and the other punctuation follows. This is the usual practice in typesetting. He provides the following examples:", "title": "In different languages" }, { "paragraph_id": 17, "text": "In legal writing in the United States, Rule 5.3 in the Bluebook citation guide governs the use of ellipses and requires a space before the first dot and between the two subsequent dots. If an ellipsis ends the sentence, then there are three dots, each separated by a space, followed by the final punctuation (e.g. Hah . . . ?). In some legal writing, an ellipsis is written as three asterisks, *** or * * *, to make it obvious that text has been omitted or to signal that the omitted text extends beyond the end of the paragraph.", "title": "In different languages" }, { "paragraph_id": 18, "text": "The Oxford Style Guide recommends setting the ellipsis as a single character … or as a series of three (narrow) spaced dots surrounded by spaces, thus: . . . . If there is an ellipsis at the end of an incomplete sentence, the final full stop is omitted. However, it is retained if the following ellipsis represents an omission between two complete sentences.", "title": "In different languages" }, { "paragraph_id": 19, "text": "The … fox jumps … The quick brown fox jumps over the lazy dog. … And if they have not died, they are still alive today. It is not cold … it is freezing cold.", "title": "In different languages" }, { "paragraph_id": 20, "text": "Contrary to The Oxford Style Guide, the University of Oxford Style Guide demands an ellipsis not to be surrounded by spaces, except when it stands for a pause; then, a space has to be set after the ellipsis (but not before). An ellipsis is never preceded or followed by a full stop.", "title": "In different languages" }, { "paragraph_id": 21, "text": "The...fox jumps... The quick brown fox jumps over the lazy dog...And if they have not died, they are still alive today. It is not cold... it is freezing cold.", "title": "In different languages" }, { "paragraph_id": 22, "text": "When applied in Polish syntax, the ellipsis is called wielokropek, literally 'multidot'. The word wielokropek distinguishes the ellipsis of Polish syntax from that of mathematical notation, in which it is known as an elipsa. When an ellipsis replaces a fragment omitted from a quotation, the ellipsis is enclosed in parentheses or square brackets. An unbracketed ellipsis indicates an interruption or pause in speech. 
The syntactic rules for ellipses are standardized by the 1983 Polska Norma document PN-83/P-55366, Zasady składania tekstów w języku polskim (Rules for Setting Texts in Polish).", "title": "In different languages" }, { "paragraph_id": 23, "text": "The combination \"ellipsis+period\" is replaced by the ellipsis. The combinations \"ellipsis+exclamation mark\" and \"ellipsis+question mark\" are written in this way: !.. ?..", "title": "In different languages" }, { "paragraph_id": 24, "text": "The most common character corresponding to an ellipsis is called 3-ten rīdā (\"3-dot leaders\", …). 2-ten rīdā exists as a character, but it is used less commonly. In writing, the ellipsis consists usually of six dots (two 3-ten rīdā characters, ……). Three dots (one 3-ten rīdā character) may be used where space is limited, such as in a header. However, variations in the number of dots exist. In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line), as in the standard Japanese Windows fonts; in vertically written text the dots are always centered horizontally. As the Japanese word for dot is pronounced \"ten\", the dots are colloquially called \"ten-ten-ten\" (てんてんてん, akin to the English \"dot dot dot\").", "title": "In different languages" }, { "paragraph_id": 25, "text": "In text in Japanese media, such as in manga or video games, ellipses are much more frequent than in English, and are often changed to another punctuation sign in translation. The ellipsis by itself represents speechlessness, or a \"pregnant pause\". Depending on the context, this could be anything from an admission of guilt to an expression of being dumbfounded at another person's words or actions. As a device, the ten-ten-ten is intended to focus the reader on a character while allowing the character to not speak any dialogue. This conveys to the reader a focus of the narrative \"camera\" on the silent subject, implying an expectation of some motion or action. It is not unheard of to see inanimate objects \"speaking\" the ellipsis.", "title": "In different languages" }, { "paragraph_id": 26, "text": "In Chinese, the ellipsis is six dots (in two groups of three dots, occupying the same horizontal or vertical space as two characters). In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line, i.e. ⋯⋯); in vertically written text the dots are always centered horizontally (i.e. Chinese: ︙︙).", "title": "In different languages" }, { "paragraph_id": 27, "text": "In Spanish, the ellipsis is commonly used as a substitute of et cetera at the end of unfinished lists. So it means \"and so forth\" or \"and other things\".", "title": "In different languages" }, { "paragraph_id": 28, "text": "Other use is the suspension of a part of a text, or a paragraph, or a phrase or a part of a word because it is obvious, or unnecessary, or implied. For instance, sometimes the ellipsis is used to avoid the complete use of expletives.", "title": "In different languages" }, { "paragraph_id": 29, "text": "When the ellipsis is placed alone into a parenthesis (...) or—less often—between brackets [...], which is what happens usually within a text transcription, it means the original text had more contents on the same position but are not useful to our target in the transcription. 
When the suppressed text is at the beginning or at the end of a text, the ellipsis does not need to be placed in a parenthesis.", "title": "In different languages" }, { "paragraph_id": 30, "text": "The number of dots is three and only three.", "title": "In different languages" }, { "paragraph_id": 31, "text": "In French, the ellipsis is commonly used at the end of lists to represent et cetera. In French typography, the ellipsis is written immediately after the preceding word, but has a space after it, for example: comme ça... pas comme ceci. If, exceptionally, it begins a sentence, there is a space before and after, for example: Lui ? ... vaut rien, je crois.... However, any omitted word, phrase or line at the end of a quoted passage would be indicated as follows: [...] (space before and after the square brackets but not inside), for example: ... à Paris, Nice, Nantes, Toulouse [...].", "title": "In different languages" }, { "paragraph_id": 32, "text": "In German, the ellipsis in general is surrounded by spaces, if it stands for one or more omitted words. On the other side there is no space between a letter or (part of) a word and an ellipsis, if it stands for one or more omitted letters, that should stick to the written letter or letters.", "title": "In different languages" }, { "paragraph_id": 33, "text": "Example for both cases, using German style: The first el...is stands for omitted letters, the second ... for an omitted word.", "title": "In different languages" }, { "paragraph_id": 34, "text": "If the ellipsis is at the end of a sentence, the final full stop is omitted.", "title": "In different languages" }, { "paragraph_id": 35, "text": "Example: I think that ...", "title": "In different languages" }, { "paragraph_id": 36, "text": "The Accademia della Crusca suggests the use of an ellipsis (\"puntini di sospensione\") to indicate a pause longer than a period and, when placed between brackets, the omission of letters, words or phrases.", "title": "In different languages" }, { "paragraph_id": 37, "text": "\"Tra le cose più preziose possedute da Andrea Sperelli era una coperta di seta fina, d’un colore azzurro disfatto, intorno a cui giravano i dodici segni dello Zodiaco in ricamo, con le denominazioni […] a caratteri gotici.\" (Gabriele D’Annunzio, Il piacere)", "title": "In different languages" }, { "paragraph_id": 38, "text": "In computer menu functions or buttons, an ellipsis means that upon selection more options (sometimes in the form of a dialog box) will be displayed, where the user can or must make a choice. If the ellipsis is absent, the function is immediately executed upon selection.", "title": "Usage in computer system menus" }, { "paragraph_id": 39, "text": "For example, the menu item \"Save\" indicates that the file will be overwritten without further input, whereas \"Save as...\" indicates that a dialog follows where the user can, for example, select another location, file name, or format.", "title": "Usage in computer system menus" }, { "paragraph_id": 40, "text": "Ellipses are also used as a separate button (particularly considering the limited screen area of mobile apps) to represent partially or completely hidden options. 
This usage may alternatively be described as a \"More button\" (see also hamburger button signifying completely hidden options).", "title": "Usage in computer system menus" }, { "paragraph_id": 41, "text": "In mobile, web, and general application design, the vertical ellipsis, ⋮, is sometimes used as an interface element, where it is sometimes called a kebab icon. The element typically indicates that a navigation menu can be accessed when the element is activated, and is a smaller version of the hamburger icon (≡) which is a stylized rendering of a menu.", "title": "Usage in computer system menus" }, { "paragraph_id": 42, "text": "An ellipsis is also often used in mathematics to mean \"and so forth\". In a list, between commas, or following a comma, a normal ellipsis is used, as in:", "title": "In mathematical notation" }, { "paragraph_id": 43, "text": "or to mean an infinite list, as:", "title": "In mathematical notation" }, { "paragraph_id": 44, "text": "To indicate the omission of values in a repeated operation, an ellipsis raised to the center of the line is used between two operation symbols or following the last operation symbol, as in:", "title": "In mathematical notation" }, { "paragraph_id": 45, "text": "Sometimes, e.g. in Russian mathematical texts, normal, non-raised, ellipses are used even in repeated summations.", "title": "In mathematical notation" }, { "paragraph_id": 46, "text": "The latter formula means the sum of all natural numbers from 1 to 100. However, it is not a formally defined mathematical symbol. Repeated summations or products may similarly be denoted using capital sigma and capital pi notation, respectively:", "title": "In mathematical notation" }, { "paragraph_id": 47, "text": "Normally dots should be used only where the pattern to be followed is clear, the exception being to show the indefinite continuation of an irrational number such as:", "title": "In mathematical notation" }, { "paragraph_id": 48, "text": "Sometimes, it is useful to display a formula compactly, for example:", "title": "In mathematical notation" }, { "paragraph_id": 49, "text": "Another example is the set of positive zeros of the cosine function:", "title": "In mathematical notation" }, { "paragraph_id": 50, "text": "There are many related uses of the ellipsis in set notation.", "title": "In mathematical notation" }, { "paragraph_id": 51, "text": "The diagonal and vertical forms of the ellipsis are particularly useful for showing missing terms in matrices, such as the size-n identity matrix:", "title": "In mathematical notation" }, { "paragraph_id": 52, "text": "A two- or three-dot ellipsis is used as an operator in some programming languages. One of its most common uses is in defining ranges or sequences, for instance 1..10 means all the numbers from 1 through 10. This is used in many languages, including Pascal, Modula, Oberon, Ada, Haskell, Perl, Ruby, Rust, Swift, Kotlin, Bash shell and F#. It is also used to indicate variadic functions in the C, C++ and Java languages.", "title": "Computer science" }, { "paragraph_id": 53, "text": "The CSS text-overflow property can be set to ellipsis, which cuts off text with an ellipsis when it overflows the content area.", "title": "Computer science" }, { "paragraph_id": 54, "text": "The ellipsis is a non-verbal cue that is often used in computer-mediated interactions, in particular in synchronous genres, such as chat. 
The reason behind its popularity is the fact that it allows people to indicate in writing several functions:", "title": "On Internet chat rooms and in text messaging" }, { "paragraph_id": 55, "text": "Although an ellipsis is technically complete with three periods (...), its rise in popularity as a \"trailing-off\" or \"silence\" indicator, particularly in mid-20th-century comic strip and comic book prose writing, has led to expanded uses online. Today, extended ellipses anywhere from two to dozens of periods have become common constructions in Internet chat rooms and text messages. The extent of repetition in itself might serve as an additional contextualization or paralinguistic cue; one paper wrote that they \"extend the lexical meaning of the words, add character to the sentences, and allow fine-tuning and personalisation of the message\".", "title": "On Internet chat rooms and in text messaging" }, { "paragraph_id": 56, "text": "In some text messaging software products, an ellipsis is displayed while the interlocutor is typing characters. The feature has been referred to as a typing awareness indicator.", "title": "On Internet chat rooms and in text messaging" }, { "paragraph_id": 57, "text": "Rows of dots are also used to indicate that a longer-lasting operation is in progress (e.g. in the initial startup messages of text-mode operating systems like DOS or in bootsectors, i.e. \"Loading...\", \"Starting...\"). Sometimes this is implemented as an animated progress indicator where more dots are added after certain sub-operations (like loading a single sector) have finished (i.e. \"Loading.....\").", "title": "Progress indicator" }, { "paragraph_id": 58, "text": "In computing, several ellipsis characters have been codified, depending on the system used.", "title": "Computer representations" }, { "paragraph_id": 59, "text": "In the Unicode standard, there are the following characters:", "title": "Computer representations" }, { "paragraph_id": 60, "text": "Unicode recognizes a series of three period characters (U+002E) as compatibility equivalent (though not canonical) to the horizontal ellipsis character.", "title": "Computer representations" }, { "paragraph_id": 61, "text": "In HTML, the horizontal ellipsis character may be represented by the entity reference &hellip; (since HTML 4.0), and the vertical ellipsis character by the entity reference &vellip; (since HTML 5.0). 
Alternatively, in HTML, XML, and SGML, a numeric character reference such as &#x2026; or &#8230; can be used.", "title": "Computer representations" }, { "paragraph_id": 62, "text": "In the TeX typesetting system, the following types of ellipsis are available:", "title": "Computer representations" }, { "paragraph_id": 63, "text": "In LaTeX, note that the reverse orientation of \\ddots can be achieved with \\reflectbox provided by the graphicx package: \\reflectbox{\\ddots} yields .", "title": "Computer representations" }, { "paragraph_id": 64, "text": "With the amsmath package from AMS-LaTeX, more specific ellipses are provided for math mode.", "title": "Computer representations" }, { "paragraph_id": 65, "text": "The horizontal ellipsis character also appears in the following older character maps:", "title": "Computer representations" }, { "paragraph_id": 66, "text": "Note that ISO/IEC 8859 encoding series provides no code point for ellipsis.", "title": "Computer representations" }, { "paragraph_id": 67, "text": "As with all characters, especially those outside the ASCII range, the author, sender and receiver of an encoded ellipsis must be in agreement upon what bytes are being used to represent the character. Naive text processing software may improperly assume that a particular encoding is being used, resulting in mojibake.", "title": "Computer representations" }, { "paragraph_id": 68, "text": "In Windows, the horizontal ellipsis can be inserted with Alt+0133, using the numeric keypad.", "title": "Computer representations" }, { "paragraph_id": 69, "text": "In macOS, it can be inserted with ⌥ Opt+; (on an English language keyboard).", "title": "Computer representations" }, { "paragraph_id": 70, "text": "In some Linux distributions, it can be inserted with AltGr+. (this produces an interpunct on other systems), or Compose...", "title": "Computer representations" }, { "paragraph_id": 71, "text": "In Android, ellipsis is a long-press key. If Gboard is in alphanumeric layout, change to numeric and special characters layout by pressing ?123 from alphanumeric layout. Once in numeric and special characters layout, long press . key to insert an ellipsis. This is a single symbol without spaces in between the three dots ( … ).", "title": "Computer representations" }, { "paragraph_id": 72, "text": "In Chinese and sometimes in Japanese, ellipsis characters are made by entering two consecutive horizontal ellipses, each with Unicode code point U+2026. In vertical texts, the application should rotate the symbol accordingly.", "title": "Computer representations" } ]
The ellipsis ... is a series of dots that indicates an intentional omission of a word, sentence, or whole section from a text without altering its original meaning. The plural is ellipses. The term originates from the Ancient Greek: ἔλλειψις, élleipsis meaning 'leave out'. Opinions differ as to how to render ellipses in printed material. According to The Chicago Manual of Style, it should consist of three periods, each separated from its neighbor by a non-breaking space: . . .. According to the AP Stylebook, the periods should be rendered with no space between them: .... A third option is to use the Unicode character U+2026 … HORIZONTAL ELLIPSIS.
2001-11-08T21:58:16Z
2023-11-18T23:59:25Z
[ "Template:Short description", "Template:Lang-zh", "Template:Code", "Template:Anli", "Template:Cite journal", "Template:Char", "Template:Reflist", "Template:Cite news", "Template:Key press", "Template:Cite web", "Template:Refend", "Template:About", "Template:Redirect", "Template:Lang-grc", "Template:Sc", "Template:Webarchive", "Template:Cite conference", "Template:Citation", "Template:Refbegin", "Template:Commons category inline", "Template:Navbox punctuation", "Template:Wiktionary inline", "Template:Unichar", "Template:Further", "Template:Cite encyclopedia", "Template:Cite book", "Template:Better source", "Template:Distinguish", "Template:IPAc-en", "Template:Lang", "Template:Blockquote", "Template:Bots", "Template:Use dmy dates", "Template:Infobox punctuation mark", "Template:Mdash" ]
https://en.wikipedia.org/wiki/Ellipsis
9,597
Enola Gay
The Enola Gay (/əˈnoʊlə/) is a Boeing B-29 Superfortress bomber, named after Enola Gay Tibbets, the mother of the pilot, Colonel Paul Tibbets. On 6 August 1945, during the final stages of World War II, it became the first aircraft to drop an atomic bomb in warfare. The bomb, code-named "Little Boy", was targeted at the city of Hiroshima, Japan, and caused the destruction of about three quarters of the city. Enola Gay participated in the second nuclear attack as the weather reconnaissance aircraft for the primary target of Kokura. Clouds and drifting smoke resulted in Nagasaki, a secondary target, being bombed instead. After the war, the Enola Gay returned to the United States, where it was operated from Roswell Army Air Field, New Mexico. In May 1946, it was flown to Kwajalein for the Operation Crossroads nuclear tests in the Pacific, but was not chosen to make the test drop at Bikini Atoll. Later that year, it was transferred to the Smithsonian Institution and spent many years parked at air bases exposed to the weather and souvenir hunters, before its 1961 disassembly and storage at a Smithsonian facility in Suitland, Maryland. In the 1980s, veterans groups engaged in a call for the Smithsonian to put the aircraft on display, leading to an acrimonious debate about exhibiting the aircraft without a proper historical context. The cockpit and nose section of the aircraft were exhibited at the National Air and Space Museum (NASM) on the National Mall, for the bombing's 50th anniversary in 1995, amid controversy. Since 2003, the entire restored B-29 has been on display at NASM's Steven F. Udvar-Hazy Center. The last survivor of its crew, Theodore Van Kirk, died on 28 July 2014 at the age of 93. The Enola Gay (Model number B-29-45-MO, Serial number 44-86292, Victor number 82) was built by the Glenn L. Martin Company (later part of Lockheed Martin) at its bomber plant in Bellevue, Nebraska, located at Offutt Field, now Offutt Air Force Base. The bomber was one of the first fifteen B-29s built to the "Silverplate" specification— of 65 eventually completed during and after World War II—giving them the primary ability to function as nuclear "weapon delivery" aircraft. These modifications included an extensively modified bomb bay with pneumatic doors and British bomb attachment and release systems, reversible pitch propellers that gave more braking power on landing, improved engines with fuel injection and better cooling, and the removal of protective armor and gun turrets. Enola Gay was personally selected by Colonel Paul W. Tibbets Jr., the commander of the 509th Composite Group, on 9 May 1945, while still on the assembly line. The aircraft was accepted by the United States Army Air Forces (USAAF) on 18 May 1945 and assigned to the 393d Bombardment Squadron, Heavy, 509th Composite Group. Crew B-9, commanded by Captain Robert A. Lewis, took delivery of the bomber and flew it from Omaha to the 509th base at Wendover Army Air Field, Utah, on 14 June 1945. Thirteen days later, the aircraft left Wendover for Guam, where it received a bomb-bay modification, and flew to North Field, Tinian, on 6 July. It was initially given the Victor (squadron-assigned identification) number 12, but on 1 August, was given the circle R tail markings of the 6th Bombardment Group as a security measure and had its Victor number changed to 82 to avoid misidentification with actual 6th Bombardment Group aircraft. 
During July, the bomber made eight practice or training flights and flew two missions, on 24 and 26 July, to drop pumpkin bombs on industrial targets at Kobe and Nagoya. Enola Gay was used on 31 July on a rehearsal flight for the actual mission. The partially assembled Little Boy gun-type fission weapon L-11, weighing 10,000 pounds (4,500 kg), was contained inside a 41-by-47-by-138-inch (100 cm × 120 cm × 350 cm) wooden crate that was secured to the deck of the USS Indianapolis. Unlike the six uranium-235 target discs, which were later flown to Tinian on three separate aircraft arriving 28 and 29 July, the assembled projectile with the nine uranium-235 rings installed was shipped in a single lead-lined steel container weighing 300 pounds (140 kg) that was locked to brackets welded to the deck of Captain Charles B. McVay III's quarters. Both the L-11 and projectile were dropped off at Tinian on 26 July 1945. On 5 August 1945, during preparation for the first atomic mission, Tibbets assumed command of the aircraft and named it after his mother, Enola Gay Tibbets, who, in turn, had been named for the heroine of a novel. When it came to selecting a name for the plane, Tibbets later recalled that: ... my thoughts turned at this point to my courageous red-haired mother, whose quiet confidence had been a source of strength to me since boyhood, and particularly during the soul-searching period when I decided to give up a medical career to become a military pilot. At a time when Dad had thought I had lost my marbles, she had taken my side and said, "I know you will be all right, son." In the early morning hours, just prior to the 6 August mission, Tibbets had a young Army Air Forces maintenance man, Private Nelson Miller, paint the name just under the pilot's window. Regularly assigned aircraft commander Robert A. Lewis was unhappy to be displaced by Tibbets for this important mission and became furious when he arrived at the aircraft on the morning of 6 August to see it painted with the now-famous nose art. Hiroshima was the primary target of the first nuclear bombing mission on 6 August, with Kokura and Nagasaki as alternative targets. Enola Gay, piloted by Tibbets, took off from North Field, in the Northern Mariana Islands, about six hours' flight time from Japan, accompanied by two other B-29s, The Great Artiste, carrying instrumentation, and a then-nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, to take photographs. The director of the Manhattan Project, Major General Leslie R. Groves Jr., wanted the event recorded for posterity, so the takeoff was illuminated by floodlights. When he wanted to taxi, Tibbets leaned out the window to direct the bystanders out of the way. On request, he gave a friendly wave for the cameras. After leaving Tinian, the three aircraft made their way separately to Iwo Jima, where they rendezvoused at 2,440 meters (8,010 ft) and set course for Japan. The aircraft arrived over the target in clear visibility at 9,855 meters (32,333 ft). Navy Captain William S. "Deak" Parsons of Project Alberta, who was in command of the mission, armed the bomb during the flight to minimize the risks during takeoff. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area. 
The release at 08:15 (Hiroshima time) went as planned, and the Little Boy took 53 seconds to fall from the aircraft flying at 31,060 feet (9,470 m) to the predetermined detonation height about 1,968 feet (600 m) above the city. Enola Gay traveled 11.5 mi (18.5 km) before it felt the shock waves from the blast. Although buffeted by the shock, neither Enola Gay nor The Great Artiste was damaged. The detonation created a blast equivalent to 15 kilotons of TNT (63 TJ). The U-235 weapon was considered very inefficient, with only 1.7% of its fissile material reacting. The radius of total destruction was about one mile (1.6 km), with resulting fires across 4.4 square miles (11 km). Americans estimated that 4.7 square miles (12 km) of the city were destroyed. Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged. Some 70,000–80,000 people, 30% of the city's population, were killed by the blast and resultant firestorm, and another 70,000 injured. Out of those killed, 20,000 were soldiers and 20,000 were Korean slave laborers. Enola Gay returned safely to its base on Tinian to great fanfare, touching down at 2:58 pm, after 12 hours 13 minutes. The Great Artiste and Necessary Evil followed at short intervals. Several hundred people, including journalists and photographers, had gathered to watch the planes return. Tibbets was the first to disembark and was presented with the Distinguished Service Cross on the spot. The Hiroshima mission was followed by another atomic strike. Originally scheduled for 11 August, it was brought forward by two days to 9 August owing to a forecast of bad weather. This time, a nuclear bomb code-named "Fat Man" was carried by B-29 Bockscar, piloted by Major Charles W. Sweeney. Enola Gay, flown by Captain George Marquardt's Crew B-10, was the weather reconnaissance aircraft for Kokura, the primary target. Enola Gay reported clear skies over Kokura, but by the time Bockscar arrived, the city was obscured by smoke from fires from the conventional bombing of Yahata by 224 B-29s the day before. After three unsuccessful passes, Bockscar diverted to its secondary target, Nagasaki, where it dropped its bomb. In contrast to the Hiroshima mission, the Nagasaki mission has been described as tactically botched, although the mission did meet its objectives. The crew encountered a number of problems in execution and had very little fuel by the time they landed at the emergency backup landing site Yontan Airfield on Okinawa. Enola Gay's crew on 6 August 1945 consisted of 12 men. The crew was: Asterisks denote regular crewmen of the Enola Gay. Of mission commander Parsons, it was said: "There is no one more responsible for getting this bomb out of the laboratory and into some form useful for combat operations than Captain Parsons, by his plain genius in the ordnance business." For the Nagasaki mission, Enola Gay was flown by Crew B-10, normally assigned to Up An' Atom: Source: Campbell, 2005, pp. 134, 191–192. On 6 November 1945, Lewis flew the Enola Gay back to the United States, arriving at the 509th's new base at Roswell Army Air Field, New Mexico, on 8 November. On 29 April 1946, Enola Gay left Roswell as part of the Operation Crossroads nuclear weapons tests in the Pacific. It flew to Kwajalein Atoll on 1 May. It was not chosen to make the test drop at Bikini Atoll and left Kwajalein on 1 July, the date of the test, reaching Fairfield-Suisun Army Air Field, California, the next day. 
The decision was made to preserve the Enola Gay, and on 24 July 1946, the aircraft was flown to Davis–Monthan Air Force Base, Tucson, Arizona, in preparation for storage. On 30 August 1946, the title to the aircraft was transferred to the Smithsonian Institution and the Enola Gay was removed from the USAAF inventory. From 1946 to 1961, the Enola Gay was put into temporary storage at a number of locations. It was at Davis-Monthan from 1 September 1946 until 3 July 1949, when it was flown to Orchard Place Air Field, Park Ridge, Illinois, by Tibbets for acceptance by the Smithsonian. It was moved to Pyote Air Force Base, Texas, on 12 January 1952, and then to Andrews Air Force Base, Maryland, on 2 December 1953, because the Smithsonian had no storage space for the aircraft. It was hoped that the Air Force would guard the plane, but, lacking hangar space, it was left outdoors on a remote part of the air base, exposed to the elements. Souvenir hunters broke in and removed parts. Insects and birds then gained access to the aircraft. Paul E. Garber of the Smithsonian Institution became concerned about the Enola Gay's condition, and on 10 August 1960, Smithsonian staff began dismantling the aircraft. The components were transported to the Smithsonian storage facility at Suitland, Maryland, on 21 July 1961. The Enola Gay remained at Suitland for many years. By the early 1980s, two veterans of the 509th, Don Rehl and his former navigator in the 509th, Frank B. Stewart, began lobbying for the aircraft to be restored and put on display. They enlisted Tibbets and Senator Barry Goldwater in their campaign. In 1983, Walter J. Boyne, a former B-52 pilot with the Strategic Air Command, became director of the National Air and Space Museum, and he made the Enola Gay's restoration a priority. Looking at the aircraft, Tibbets recalled, was a "sad meeting. [My] fond memories, and I don't mean the dropping of the bomb, were the numerous occasions I flew the airplane ... I pushed it very, very hard and it never failed me ... It was probably the most beautiful piece of machinery that any pilot ever flew." "Enola Gay" is an anti-war song by the English electronic band Orchestral Manoeuvres in the Dark (OMD), and the only single taken from their second studio album Organisation (1980). Restoration of the bomber began on 5 December 1984, at the Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland-Silver Hill, Maryland. The propellers that were used on the bombing mission were later shipped to Texas A&M University. One of these propellers was trimmed to 12.5 feet (3.8 m) for use in the university's Oran W. Nicks Low Speed Wind Tunnel. The lightweight aluminum variable-pitch propeller is powered by a 1,250 kVA electric motor, providing a wind speed up to 200 miles per hour (320 km/h). Two engines were rebuilt at Garber and two at San Diego Air & Space Museum. Some parts and instruments had been removed and could not be located. Replacements were found or fabricated, and marked so that future curators could distinguish them from the original components. The Enola Gay became the center of a controversy at the Smithsonian Institution when the museum planned to put its fuselage on public display in 1995 as part of an exhibit commemorating the 50th anniversary of the atomic bombing of Hiroshima. The exhibit, The Crossroads: The End of World War II, the Atomic Bomb and the Cold War, was drafted by the Smithsonian's National Air and Space Museum staff, and arranged around the restored Enola Gay. 
Critics of the planned exhibit, especially those of the American Legion and the Air Force Association, charged that the exhibit focused too much attention on the Japanese casualties inflicted by the nuclear bomb, rather than on the motives for the bombing or the discussion of the bomb's role in ending the conflict with Japan. The exhibit brought to national attention many long-standing academic and political issues related to retrospective views of the bombings. After attempts to revise the exhibit to meet the satisfaction of competing interest groups, the exhibit was canceled on 30 January 1995. Martin O. Harwit, Director of the National Air and Space Museum, was compelled to resign over the controversy. He later reflected that The dispute was not simply about the atomic bomb. Rather, the dispute was sometimes a symbolic issue in a "culture war" in which many Americans lumped together the seeming decline of American power, the difficulties of the domestic economy, the threats in world trade and especially Japan's successes, the loss of domestic jobs, and even changes in American gender roles and shifts in the American family. To a number of Americans, the very people responsible for the script were the people who were changing America. The bomb, representing the end of World War II and suggesting the height of American power was to be celebrated. It was, in this judgment, a crucial symbol of America's "good war", one fought justly for noble purposes at a time when America was united. Those who in any way questioned the bomb's use were, in this emotional framework, the enemies of America. The forward fuselage went on display on 28 June 1995. On 2 July 1995, three people were arrested for throwing ash and human blood on the aircraft's fuselage, following an earlier incident in which a protester had thrown red paint over the gallery's carpeting. The exhibition closed on 18 May 1998 and the fuselage was returned to the Garber Facility for final restoration. Its restoration work began in 1984, and eventually required 300,000 staff hours. While the fuselage was on display, from 1995 to 1998, work continued on the remaining unrestored components. The aircraft was shipped in pieces to the National Air and Space Museum's Steven F. Udvar-Hazy Center in Chantilly, Virginia from March–June 2003, with the fuselage and wings reunited for the first time since 1960 on 10 April 2003 and assembly completed on 8 August 2003. The aircraft has been on display at the Udvar-Hazy Center since the museum annex opened on 15 December 2003. As a result of the earlier controversy, the signage around the aircraft provided only the same succinct technical data as is provided for other aircraft in the museum, without discussion of the controversial issues. It read: Boeing's B-29 Superfortress was the most sophisticated propeller-driven bomber of World War II, and the first bomber to house its crew in pressurized compartments. Although designed to fight in the European theater, the B-29 found its niche on the other side of the globe. In the Pacific, B-29s delivered a variety of aerial weapons: conventional bombs, incendiary bombs, mines, and two nuclear weapons. On 6 August 1945, this Martin-built B-29-45-MO dropped the first atomic weapon used in combat on Hiroshima, Japan. Three days later, Bockscar (on display at the U.S. Air Force Museum near Dayton, Ohio) dropped a second atomic bomb on Nagasaki, Japan. Enola Gay flew as the advance weather reconnaissance aircraft that day. 
A third B-29, The Great Artiste, flew as an observation aircraft on both missions. Transferred from the U.S. Air Force Wingspan: 43 m (141 ft 3 in) Length: 30.2 m (99 ft) Height: 9 m (27 ft 9 in) Weight, empty: 32,580 kg (71,826 lb) Weight, gross: 63,504 kg (140,000 lb) Top speed: 546 km/h (339 mph) Engines: 4 Wright R-3350-57 Cyclone turbo-supercharged radials, 2,200 hp Crew: 12 (Hiroshima mission) Armament: two .50 caliber machine guns Ordnance: Little Boy atomic bomb Manufacturer: Martin Co., Omaha, Nebraska, 1945 A19500100000 The display of the Enola Gay without reference to the historical context of World War II, the Cold War, or the development and deployment of nuclear weapons aroused controversy. A petition from a group calling themselves the Committee for a National Discussion of Nuclear History and Current Policy bemoaned the display of Enola Gay as a technological achievement, which it described as an "extraordinary callousness toward the victims, indifference to the deep divisions among American citizens about the propriety of these actions, and disregard for the feelings of most of the world's peoples". It attracted signatures from notable figures including historian Gar Alperovitz, social critic Noam Chomsky, whistle blower Daniel Ellsberg, physicist Joseph Rotblat, writer Kurt Vonnegut, producer Norman Lear, actor Martin Sheen and filmmaker Oliver Stone. 38°54′39″N 77°26′39″W / 38.9108°N 77.4442°W / 38.9108; -77.4442
[ { "paragraph_id": 0, "text": "The Enola Gay (/əˈnoʊlə/) is a Boeing B-29 Superfortress bomber, named after Enola Gay Tibbets, the mother of the pilot, Colonel Paul Tibbets. On 6 August 1945, during the final stages of World War II, it became the first aircraft to drop an atomic bomb in warfare. The bomb, code-named \"Little Boy\", was targeted at the city of Hiroshima, Japan, and caused the destruction of about three quarters of the city. Enola Gay participated in the second nuclear attack as the weather reconnaissance aircraft for the primary target of Kokura. Clouds and drifting smoke resulted in Nagasaki, a secondary target, being bombed instead.", "title": "" }, { "paragraph_id": 1, "text": "After the war, the Enola Gay returned to the United States, where it was operated from Roswell Army Air Field, New Mexico. In May 1946, it was flown to Kwajalein for the Operation Crossroads nuclear tests in the Pacific, but was not chosen to make the test drop at Bikini Atoll. Later that year, it was transferred to the Smithsonian Institution and spent many years parked at air bases exposed to the weather and souvenir hunters, before its 1961 disassembly and storage at a Smithsonian facility in Suitland, Maryland.", "title": "" }, { "paragraph_id": 2, "text": "In the 1980s, veterans groups engaged in a call for the Smithsonian to put the aircraft on display, leading to an acrimonious debate about exhibiting the aircraft without a proper historical context. The cockpit and nose section of the aircraft were exhibited at the National Air and Space Museum (NASM) on the National Mall, for the bombing's 50th anniversary in 1995, amid controversy. Since 2003, the entire restored B-29 has been on display at NASM's Steven F. Udvar-Hazy Center. The last survivor of its crew, Theodore Van Kirk, died on 28 July 2014 at the age of 93.", "title": "" }, { "paragraph_id": 3, "text": "The Enola Gay (Model number B-29-45-MO, Serial number 44-86292, Victor number 82) was built by the Glenn L. Martin Company (later part of Lockheed Martin) at its bomber plant in Bellevue, Nebraska, located at Offutt Field, now Offutt Air Force Base. The bomber was one of the first fifteen B-29s built to the \"Silverplate\" specification— of 65 eventually completed during and after World War II—giving them the primary ability to function as nuclear \"weapon delivery\" aircraft. These modifications included an extensively modified bomb bay with pneumatic doors and British bomb attachment and release systems, reversible pitch propellers that gave more braking power on landing, improved engines with fuel injection and better cooling, and the removal of protective armor and gun turrets.", "title": "World War II" }, { "paragraph_id": 4, "text": "Enola Gay was personally selected by Colonel Paul W. Tibbets Jr., the commander of the 509th Composite Group, on 9 May 1945, while still on the assembly line. The aircraft was accepted by the United States Army Air Forces (USAAF) on 18 May 1945 and assigned to the 393d Bombardment Squadron, Heavy, 509th Composite Group. Crew B-9, commanded by Captain Robert A. Lewis, took delivery of the bomber and flew it from Omaha to the 509th base at Wendover Army Air Field, Utah, on 14 June 1945.", "title": "World War II" }, { "paragraph_id": 5, "text": "Thirteen days later, the aircraft left Wendover for Guam, where it received a bomb-bay modification, and flew to North Field, Tinian, on 6 July. 
It was initially given the Victor (squadron-assigned identification) number 12, but on 1 August, was given the circle R tail markings of the 6th Bombardment Group as a security measure and had its Victor number changed to 82 to avoid misidentification with actual 6th Bombardment Group aircraft. During July, the bomber made eight practice or training flights and flew two missions, on 24 and 26 July, to drop pumpkin bombs on industrial targets at Kobe and Nagoya. Enola Gay was used on 31 July on a rehearsal flight for the actual mission.", "title": "World War II" }, { "paragraph_id": 6, "text": "The partially assembled Little Boy gun-type fission weapon L-11, weighing 10,000 pounds (4,500 kg), was contained inside a 41-by-47-by-138-inch (100 cm × 120 cm × 350 cm) wooden crate that was secured to the deck of the USS Indianapolis. Unlike the six uranium-235 target discs, which were later flown to Tinian on three separate aircraft arriving 28 and 29 July, the assembled projectile with the nine uranium-235 rings installed was shipped in a single lead-lined steel container weighing 300 pounds (140 kg) that was locked to brackets welded to the deck of Captain Charles B. McVay III's quarters. Both the L-11 and projectile were dropped off at Tinian on 26 July 1945.", "title": "World War II" }, { "paragraph_id": 7, "text": "On 5 August 1945, during preparation for the first atomic mission, Tibbets assumed command of the aircraft and named it after his mother, Enola Gay Tibbets, who, in turn, had been named for the heroine of a novel. When it came to selecting a name for the plane, Tibbets later recalled that:", "title": "World War II" }, { "paragraph_id": 8, "text": "... my thoughts turned at this point to my courageous red-haired mother, whose quiet confidence had been a source of strength to me since boyhood, and particularly during the soul-searching period when I decided to give up a medical career to become a military pilot. At a time when Dad had thought I had lost my marbles, she had taken my side and said, \"I know you will be all right, son.\"", "title": "World War II" }, { "paragraph_id": 9, "text": "In the early morning hours, just prior to the 6 August mission, Tibbets had a young Army Air Forces maintenance man, Private Nelson Miller, paint the name just under the pilot's window. Regularly assigned aircraft commander Robert A. Lewis was unhappy to be displaced by Tibbets for this important mission and became furious when he arrived at the aircraft on the morning of 6 August to see it painted with the now-famous nose art.", "title": "World War II" }, { "paragraph_id": 10, "text": "Hiroshima was the primary target of the first nuclear bombing mission on 6 August, with Kokura and Nagasaki as alternative targets. Enola Gay, piloted by Tibbets, took off from North Field, in the Northern Mariana Islands, about six hours' flight time from Japan, accompanied by two other B-29s, The Great Artiste, carrying instrumentation, and a then-nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, to take photographs. The director of the Manhattan Project, Major General Leslie R. Groves Jr., wanted the event recorded for posterity, so the takeoff was illuminated by floodlights. When he wanted to taxi, Tibbets leaned out the window to direct the bystanders out of the way. 
On request, he gave a friendly wave for the cameras.", "title": "World War II" }, { "paragraph_id": 11, "text": "After leaving Tinian, the three aircraft made their way separately to Iwo Jima, where they rendezvoused at 2,440 meters (8,010 ft) and set course for Japan. The aircraft arrived over the target in clear visibility at 9,855 meters (32,333 ft). Navy Captain William S. \"Deak\" Parsons of Project Alberta, who was in command of the mission, armed the bomb during the flight to minimize the risks during takeoff. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area.", "title": "World War II" }, { "paragraph_id": 12, "text": "The release at 08:15 (Hiroshima time) went as planned, and the Little Boy took 53 seconds to fall from the aircraft flying at 31,060 feet (9,470 m) to the predetermined detonation height about 1,968 feet (600 m) above the city. Enola Gay traveled 11.5 mi (18.5 km) before it felt the shock waves from the blast. Although buffeted by the shock, neither Enola Gay nor The Great Artiste was damaged.", "title": "World War II" }, { "paragraph_id": 13, "text": "The detonation created a blast equivalent to 15 kilotons of TNT (63 TJ). The U-235 weapon was considered very inefficient, with only 1.7% of its fissile material reacting. The radius of total destruction was about one mile (1.6 km), with resulting fires across 4.4 square miles (11 km). Americans estimated that 4.7 square miles (12 km) of the city were destroyed. Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged. Some 70,000–80,000 people, 30% of the city's population, were killed by the blast and resultant firestorm, and another 70,000 injured. Out of those killed, 20,000 were soldiers and 20,000 were Korean slave laborers.", "title": "World War II" }, { "paragraph_id": 14, "text": "Enola Gay returned safely to its base on Tinian to great fanfare, touching down at 2:58 pm, after 12 hours 13 minutes. The Great Artiste and Necessary Evil followed at short intervals. Several hundred people, including journalists and photographers, had gathered to watch the planes return. Tibbets was the first to disembark and was presented with the Distinguished Service Cross on the spot.", "title": "World War II" }, { "paragraph_id": 15, "text": "The Hiroshima mission was followed by another atomic strike. Originally scheduled for 11 August, it was brought forward by two days to 9 August owing to a forecast of bad weather. This time, a nuclear bomb code-named \"Fat Man\" was carried by B-29 Bockscar, piloted by Major Charles W. Sweeney. Enola Gay, flown by Captain George Marquardt's Crew B-10, was the weather reconnaissance aircraft for Kokura, the primary target. Enola Gay reported clear skies over Kokura, but by the time Bockscar arrived, the city was obscured by smoke from fires from the conventional bombing of Yahata by 224 B-29s the day before. After three unsuccessful passes, Bockscar diverted to its secondary target, Nagasaki, where it dropped its bomb. In contrast to the Hiroshima mission, the Nagasaki mission has been described as tactically botched, although the mission did meet its objectives. The crew encountered a number of problems in execution and had very little fuel by the time they landed at the emergency backup landing site Yontan Airfield on Okinawa.", "title": "World War II" }, { "paragraph_id": 16, "text": "Enola Gay's crew on 6 August 1945 consisted of 12 men. 
The crew was:", "title": "Crews" }, { "paragraph_id": 17, "text": "Asterisks denote regular crewmen of the Enola Gay.", "title": "Crews" }, { "paragraph_id": 18, "text": "Of mission commander Parsons, it was said: \"There is no one more responsible for getting this bomb out of the laboratory and into some form useful for combat operations than Captain Parsons, by his plain genius in the ordnance business.\"", "title": "Crews" }, { "paragraph_id": 19, "text": "For the Nagasaki mission, Enola Gay was flown by Crew B-10, normally assigned to Up An' Atom:", "title": "Crews" }, { "paragraph_id": 20, "text": "Source: Campbell, 2005, pp. 134, 191–192.", "title": "Crews" }, { "paragraph_id": 21, "text": "On 6 November 1945, Lewis flew the Enola Gay back to the United States, arriving at the 509th's new base at Roswell Army Air Field, New Mexico, on 8 November. On 29 April 1946, Enola Gay left Roswell as part of the Operation Crossroads nuclear weapons tests in the Pacific. It flew to Kwajalein Atoll on 1 May. It was not chosen to make the test drop at Bikini Atoll and left Kwajalein on 1 July, the date of the test, reaching Fairfield-Suisun Army Air Field, California, the next day.", "title": "Subsequent history" }, { "paragraph_id": 22, "text": "The decision was made to preserve the Enola Gay, and on 24 July 1946, the aircraft was flown to Davis–Monthan Air Force Base, Tucson, Arizona, in preparation for storage. On 30 August 1946, the title to the aircraft was transferred to the Smithsonian Institution and the Enola Gay was removed from the USAAF inventory. From 1946 to 1961, the Enola Gay was put into temporary storage at a number of locations. It was at Davis-Monthan from 1 September 1946 until 3 July 1949, when it was flown to Orchard Place Air Field, Park Ridge, Illinois, by Tibbets for acceptance by the Smithsonian. It was moved to Pyote Air Force Base, Texas, on 12 January 1952, and then to Andrews Air Force Base, Maryland, on 2 December 1953, because the Smithsonian had no storage space for the aircraft.", "title": "Subsequent history" }, { "paragraph_id": 23, "text": "It was hoped that the Air Force would guard the plane, but, lacking hangar space, it was left outdoors on a remote part of the air base, exposed to the elements. Souvenir hunters broke in and removed parts. Insects and birds then gained access to the aircraft. Paul E. Garber of the Smithsonian Institution became concerned about the Enola Gay's condition, and on 10 August 1960, Smithsonian staff began dismantling the aircraft. The components were transported to the Smithsonian storage facility at Suitland, Maryland, on 21 July 1961.", "title": "Subsequent history" }, { "paragraph_id": 24, "text": "The Enola Gay remained at Suitland for many years. By the early 1980s, two veterans of the 509th, Don Rehl and his former navigator in the 509th, Frank B. Stewart, began lobbying for the aircraft to be restored and put on display. They enlisted Tibbets and Senator Barry Goldwater in their campaign. In 1983, Walter J. Boyne, a former B-52 pilot with the Strategic Air Command, became director of the National Air and Space Museum, and he made the Enola Gay's restoration a priority. Looking at the aircraft, Tibbets recalled, was a \"sad meeting. [My] fond memories, and I don't mean the dropping of the bomb, were the numerous occasions I flew the airplane ... I pushed it very, very hard and it never failed me ... 
It was probably the most beautiful piece of machinery that any pilot ever flew.\"", "title": "Subsequent history" }, { "paragraph_id": 25, "text": "\"Enola Gay\" is an anti-war song by the English electronic band Orchestral Manoeuvres in the Dark (OMD), and the only single taken from their second studio album Organisation (1980).", "title": "In Popular Culture" }, { "paragraph_id": 26, "text": "Restoration of the bomber began on 5 December 1984, at the Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland-Silver Hill, Maryland. The propellers that were used on the bombing mission were later shipped to Texas A&M University. One of these propellers was trimmed to 12.5 feet (3.8 m) for use in the university's Oran W. Nicks Low Speed Wind Tunnel. The lightweight aluminum variable-pitch propeller is powered by a 1,250 kVA electric motor, providing a wind speed up to 200 miles per hour (320 km/h). Two engines were rebuilt at Garber and two at San Diego Air & Space Museum. Some parts and instruments had been removed and could not be located. Replacements were found or fabricated, and marked so that future curators could distinguish them from the original components.", "title": "Restoration" }, { "paragraph_id": 27, "text": "The Enola Gay became the center of a controversy at the Smithsonian Institution when the museum planned to put its fuselage on public display in 1995 as part of an exhibit commemorating the 50th anniversary of the atomic bombing of Hiroshima. The exhibit, The Crossroads: The End of World War II, the Atomic Bomb and the Cold War, was drafted by the Smithsonian's National Air and Space Museum staff, and arranged around the restored Enola Gay.", "title": "Restoration" }, { "paragraph_id": 28, "text": "Critics of the planned exhibit, especially those of the American Legion and the Air Force Association, charged that the exhibit focused too much attention on the Japanese casualties inflicted by the nuclear bomb, rather than on the motives for the bombing or the discussion of the bomb's role in ending the conflict with Japan. The exhibit brought to national attention many long-standing academic and political issues related to retrospective views of the bombings. After attempts to revise the exhibit to meet the satisfaction of competing interest groups, the exhibit was canceled on 30 January 1995. Martin O. Harwit, Director of the National Air and Space Museum, was compelled to resign over the controversy. He later reflected that", "title": "Restoration" }, { "paragraph_id": 29, "text": "The dispute was not simply about the atomic bomb. Rather, the dispute was sometimes a symbolic issue in a \"culture war\" in which many Americans lumped together the seeming decline of American power, the difficulties of the domestic economy, the threats in world trade and especially Japan's successes, the loss of domestic jobs, and even changes in American gender roles and shifts in the American family. To a number of Americans, the very people responsible for the script were the people who were changing America. The bomb, representing the end of World War II and suggesting the height of American power was to be celebrated. It was, in this judgment, a crucial symbol of America's \"good war\", one fought justly for noble purposes at a time when America was united. Those who in any way questioned the bomb's use were, in this emotional framework, the enemies of America.", "title": "Restoration" }, { "paragraph_id": 30, "text": "The forward fuselage went on display on 28 June 1995. 
On 2 July 1995, three people were arrested for throwing ash and human blood on the aircraft's fuselage, following an earlier incident in which a protester had thrown red paint over the gallery's carpeting. The exhibition closed on 18 May 1998 and the fuselage was returned to the Garber Facility for final restoration.", "title": "Restoration" }, { "paragraph_id": 31, "text": "Its restoration work began in 1984, and eventually required 300,000 staff hours. While the fuselage was on display, from 1995 to 1998, work continued on the remaining unrestored components. The aircraft was shipped in pieces to the National Air and Space Museum's Steven F. Udvar-Hazy Center in Chantilly, Virginia from March–June 2003, with the fuselage and wings reunited for the first time since 1960 on 10 April 2003 and assembly completed on 8 August 2003. The aircraft has been on display at the Udvar-Hazy Center since the museum annex opened on 15 December 2003. As a result of the earlier controversy, the signage around the aircraft provided only the same succinct technical data as is provided for other aircraft in the museum, without discussion of the controversial issues. It read:", "title": "Restoration" }, { "paragraph_id": 32, "text": "Boeing's B-29 Superfortress was the most sophisticated propeller-driven bomber of World War II, and the first bomber to house its crew in pressurized compartments. Although designed to fight in the European theater, the B-29 found its niche on the other side of the globe. In the Pacific, B-29s delivered a variety of aerial weapons: conventional bombs, incendiary bombs, mines, and two nuclear weapons.", "title": "Restoration" }, { "paragraph_id": 33, "text": "On 6 August 1945, this Martin-built B-29-45-MO dropped the first atomic weapon used in combat on Hiroshima, Japan. Three days later, Bockscar (on display at the U.S. Air Force Museum near Dayton, Ohio) dropped a second atomic bomb on Nagasaki, Japan. Enola Gay flew as the advance weather reconnaissance aircraft that day. A third B-29, The Great Artiste, flew as an observation aircraft on both missions.", "title": "Restoration" }, { "paragraph_id": 34, "text": "Transferred from the U.S. Air Force", "title": "Restoration" }, { "paragraph_id": 35, "text": "Wingspan: 43 m (141 ft 3 in) Length: 30.2 m (99 ft) Height: 9 m (27 ft 9 in) Weight, empty: 32,580 kg (71,826 lb) Weight, gross: 63,504 kg (140,000 lb) Top speed: 546 km/h (339 mph) Engines: 4 Wright R-3350-57 Cyclone turbo-supercharged radials, 2,200 hp Crew: 12 (Hiroshima mission) Armament: two .50 caliber machine guns Ordnance: Little Boy atomic bomb Manufacturer: Martin Co., Omaha, Nebraska, 1945 A19500100000", "title": "Restoration" }, { "paragraph_id": 36, "text": "The display of the Enola Gay without reference to the historical context of World War II, the Cold War, or the development and deployment of nuclear weapons aroused controversy. A petition from a group calling themselves the Committee for a National Discussion of Nuclear History and Current Policy bemoaned the display of Enola Gay as a technological achievement, which it described as an \"extraordinary callousness toward the victims, indifference to the deep divisions among American citizens about the propriety of these actions, and disregard for the feelings of most of the world's peoples\". 
It attracted signatures from notable figures including historian Gar Alperovitz, social critic Noam Chomsky, whistle blower Daniel Ellsberg, physicist Joseph Rotblat, writer Kurt Vonnegut, producer Norman Lear, actor Martin Sheen and filmmaker Oliver Stone.", "title": "Restoration" }, { "paragraph_id": 37, "text": "38°54′39″N 77°26′39″W / 38.9108°N 77.4442°W / 38.9108; -77.4442", "title": "External links" } ]
The Enola Gay is a Boeing B-29 Superfortress bomber, named after Enola Gay Tibbets, the mother of the pilot, Colonel Paul Tibbets. On 6 August 1945, during the final stages of World War II, it became the first aircraft to drop an atomic bomb in warfare. The bomb, code-named "Little Boy", was targeted at the city of Hiroshima, Japan, and caused the destruction of about three quarters of the city. Enola Gay participated in the second nuclear attack as the weather reconnaissance aircraft for the primary target of Kokura. Clouds and drifting smoke resulted in Nagasaki, a secondary target, being bombed instead. After the war, the Enola Gay returned to the United States, where it was operated from Roswell Army Air Field, New Mexico. In May 1946, it was flown to Kwajalein for the Operation Crossroads nuclear tests in the Pacific, but was not chosen to make the test drop at Bikini Atoll. Later that year, it was transferred to the Smithsonian Institution and spent many years parked at air bases exposed to the weather and souvenir hunters, before its 1961 disassembly and storage at a Smithsonian facility in Suitland, Maryland. In the 1980s, veterans groups engaged in a call for the Smithsonian to put the aircraft on display, leading to an acrimonious debate about exhibiting the aircraft without a proper historical context. The cockpit and nose section of the aircraft were exhibited at the National Air and Space Museum (NASM) on the National Mall, for the bombing's 50th anniversary in 1995, amid controversy. Since 2003, the entire restored B-29 has been on display at NASM's Steven F. Udvar-Hazy Center. The last survivor of its crew, Theodore Van Kirk, died on 28 July 2014 at the age of 93.
2001-10-28T01:16:28Z
2023-12-18T00:31:33Z
[ "Template:Cite book", "Template:Commons category", "Template:Manhattan Project", "Template:USS", "Template:Cite journal", "Template:Sfn", "Template:Refend", "Template:Blockquote", "Template:Small", "Template:Cite news", "Template:Refbegin", "Template:Short description", "Template:Convert", "Template:Infobox aircraft career", "Template:Coord", "Template:IPAc-en", "Template:Refn", "Template:Cbignore", "Template:Dead link", "Template:Italic title", "Template:Clear", "Template:Cite web", "Template:'", "Template:Reflist", "Template:Good article", "Template:Infobox aircraft begin", "Template:Main", "Template:Webarchive", "Template:B-29 family", "Template:Portal bar", "Template:About", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Enola_Gay
9,598
Electronvolt
In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the measure of an amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to the exact value 1.602176634×10⁻¹⁹ J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Since q must be an integer multiple of the elementary charge e for any isolated particle, the gained energy in units of electronvolts conveniently equals that integer times the voltage. It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (10⁹) electronvolts; it is equivalent to the GeV. An electronvolt is the amount of kinetic energy gained or lost by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, 1 J/C, multiplied by the elementary charge e = 1.602176634×10⁻¹⁹ C. Therefore, one electronvolt is equal to 1.602176634×10⁻¹⁹ J. The electronvolt (eV) is a unit of energy, but is not an SI unit. The SI unit of energy is the joule (J). By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum (from E = mc²). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The kilogram equivalent of 1 eV/c² is about 1.783×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c², which makes the GeV/c² a convenient unit of mass for particle physics. The atomic mass constant (mu), one twelfth of the mass of a carbon-12 atom, is close to the mass of a proton. To convert to the electronvolt mass-equivalent, use the relation E = mc². By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c. In natural units in which the fundamental velocity constant c is numerically 1, the c may informally be omitted to express momentum as electronvolts. The energy–momentum relation E² = (pc)² + (m₀c²)², which in natural units (with c = 1) becomes E² = p² + m₀², is a Pythagorean equation. When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as E ≃ p in high-energy physics such that an applied energy in units of eV conveniently results in an approximately equivalent change of momentum in units of eV/c. The dimensions of momentum units are T⁻¹LM. The dimensions of energy units are T⁻²L²M. Dividing the units of energy (such as eV) by a fundamental constant (such as the speed of light) that has units of velocity (T⁻¹L) facilitates the required conversion for using energy units to describe momentum. For example, if the momentum p of an electron is said to be 1 GeV, then the conversion to the MKS system of units can be achieved by expressing the energy in joules (1 GeV = 1.602176634×10⁻¹⁰ J) and dividing by the speed of light, giving p ≈ 5.34×10⁻¹⁹ kg⋅m/s. In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses. Outside this system of units, the conversion factors between electronvolt, second, and nanometer are ħ ≈ 6.582×10⁻¹⁶ eV⋅s and ħc ≈ 197.3 eV⋅nm. The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B⁰ meson has a lifetime of 1.530(9) picoseconds, a mean decay length of cτ = 459.7 μm, or, equivalently, a decay width of (4.302±0.025)×10⁻⁴ eV. Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds. Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy. In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: T = E/kB, where kB is the Boltzmann constant. The kB is assumed when using the electronvolt to express temperature, for example, a typical magnetic confinement fusion plasma is 15 keV (kiloelectronvolt), which is equal to 174 MK (megakelvin). As an approximation: kBT is about 0.025 eV (≈ 290 K/11604 K/eV) at a temperature of 20 °C. The energy E, frequency ν, and wavelength λ of a photon are related by E = hν = hc/λ, where h is the Planck constant and c is the speed of light. This reduces to E (eV) ≈ 1239.84/λ (nm), so a photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength 1240 nm or frequency 241.8 THz. In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material. One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ 96485 C⋅mol⁻¹), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n.
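The conversions described above can be checked numerically. The following short Python sketch uses only the exact defined constants quoted in the text; the function names are illustrative and do not come from any particular library.

E_CHARGE = 1.602176634e-19    # elementary charge in coulombs (exact); also 1 eV in joules
C_LIGHT = 2.99792458e8        # speed of light in m/s (exact)
K_BOLTZMANN = 1.380649e-23    # Boltzmann constant in J/K (exact)
H_PLANCK = 6.62607015e-34     # Planck constant in J·s (exact)

def ev_to_joule(energy_ev):
    # Energy in electronvolts -> energy in joules.
    return energy_ev * E_CHARGE

def ev_per_c2_to_kg(mass_ev):
    # Mass in eV/c² -> mass in kilograms, via E = mc².
    return mass_ev * E_CHARGE / C_LIGHT**2

def ev_to_kelvin(energy_ev):
    # Temperature expressed in eV -> kelvin (divide by the Boltzmann constant).
    return energy_ev * E_CHARGE / K_BOLTZMANN

def photon_wavelength_nm(energy_ev):
    # Photon energy in eV -> vacuum wavelength in nanometres (wavelength = hc/E).
    return H_PLANCK * C_LIGHT / (energy_ev * E_CHARGE) * 1e9

print(ev_to_joule(1.0))            # 1.602176634e-19 J
print(ev_per_c2_to_kg(0.511e6))    # about 9.11e-31 kg, the electron mass
print(ev_to_kelvin(15e3))          # about 1.74e8 K, i.e. roughly 174 MK
print(photon_wavelength_nm(2.33))  # about 532 nm, green light

Running the sketch reproduces the figures quoted in the article: roughly 9.11×10⁻³¹ kg for an electron of 0.511 MeV/c², about 174 MK for a 15 keV plasma, and about 532 nm for a 2.33 eV photon.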
[ { "paragraph_id": 0, "text": "In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the measure of an amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to the exact value 1.602176634×10 J.", "title": "" }, { "paragraph_id": 1, "text": "Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Since q must be an integer multiple of the elementary charge e for any isolated particle, the gained energy in units of electronvolts conveniently equals that integer times the voltage.", "title": "" }, { "paragraph_id": 2, "text": "It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (10) electronvolts; it is equivalent to the GeV.", "title": "" }, { "paragraph_id": 3, "text": "An electronvolt is the amount of kinetic energy gained or lost by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, 1 J/C, multiplied by the elementary charge e = 1.602176634×10 C. Therefore, one electronvolt is equal to 1.602176634×10 J.", "title": "Definition" }, { "paragraph_id": 4, "text": "The electronvolt (eV) is a unit of energy, but is not an SI unit. The SI unit of energy is the joule (J).", "title": "Definition" }, { "paragraph_id": 5, "text": "", "title": "Relation to other physical properties and units" }, { "paragraph_id": 6, "text": "By mass–energy equivalence, the electronvolt corresponds to a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c, where c is the speed of light in vacuum (from E = mc). It is common to informally express mass in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The kilogram equivalent of 1 eV/c is:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 7, "text": "For example, an electron and a positron, each with a mass of 0.511 MeV/c, can annihilate to yield 1.022 MeV of energy. A proton has a mass of 0.938 GeV/c. In general, the masses of all hadrons are of the order of 1 GeV/c, which makes the GeV/c a convenient unit of mass for particle physics:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 8, "text": "The atomic mass constant (mu), one twelfth of the mass a carbon-12 atom, is close to the mass of a proton. To convert to electronvolt mass-equivalent, use the formula:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 9, "text": "By dividing a particle's kinetic energy in electronvolts by the fundamental constant c (the speed of light), one can describe the particle's momentum in units of eV/c. 
In natural units in which the fundamental velocity constant c is numerically 1, the c may informally be omitted to express momentum as electronvolts.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 10, "text": "The energy momentum relation", "title": "Relation to other physical properties and units" }, { "paragraph_id": 11, "text": "in natural units (with c = 1 {\\displaystyle c=1} )", "title": "Relation to other physical properties and units" }, { "paragraph_id": 12, "text": "is a Pythagorean equation. When a relatively high energy is applied to a particle with relatively low rest mass, it can be approximated as E ≃ p {\\displaystyle E\\simeq p} in high-energy physics such that an applied energy in units of eV conveniently results in an approximately equivalent change of momentum in units of eV/c.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 13, "text": "The dimensions of momentum units are TLM. The dimensions of energy units are TLM. Dividing the units of energy (such as eV) by a fundamental constant (such as the speed of light) that has units of velocity (TL) facilitates the required conversion for using energy units to describe momentum.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 14, "text": "For example, if the momentum p of an electron is said to be 1 GeV, then the conversion to MKS system of units can be achieved by:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 15, "text": "In particle physics, a system of natural units in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 16, "text": "Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 17, "text": "The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B meson has a lifetime of 1.530(9) picoseconds, mean decay length is cτ = 459.7 μm, or a decay width of (4.302±25)×10 eV.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 18, "text": "Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 19, "text": "Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 20, "text": "In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. 
The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale:", "title": "Relation to other physical properties and units" }, { "paragraph_id": 21, "text": "where kB is the Boltzmann constant.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 22, "text": "The kB is assumed when using the electronvolt to express temperature, for example, a typical magnetic confinement fusion plasma is 15 keV (kiloelectronvolt), which is equal to 174 MK (megakelvin).", "title": "Relation to other physical properties and units" }, { "paragraph_id": 23, "text": "As an approximation: kBT is about 0.025 eV (≈ 290 K/11604 K/eV) at a temperature of 20 °C.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 24, "text": "The energy E, frequency v, and wavelength λ of a photon are related by", "title": "Relation to other physical properties and units" }, { "paragraph_id": 25, "text": "where h is the Planck constant, c is the speed of light. This reduces to", "title": "Relation to other physical properties and units" }, { "paragraph_id": 26, "text": "A photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength 1240 nm or frequency 241.8 THz.", "title": "Relation to other physical properties and units" }, { "paragraph_id": 27, "text": "In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the \"electron equivalent\" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material.", "title": "Scattering experiments" }, { "paragraph_id": 28, "text": "One mole of particles given 1 eV of energy each has approximately 96.5 kJ of energy – this corresponds to the Faraday constant (F ≈ 96485 C⋅mol), where the energy in joules of n moles of particles each with energy E eV is equal to E·F·n.", "title": "Energy comparisons" } ]
In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the measure of an amount of kinetic energy gained by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to the exact value 1.602176634×10⁻¹⁹ J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a voltage of V. Since q must be an integer multiple of the elementary charge e for any isolated particle, the gained energy in units of electronvolts conveniently equals that integer times the voltage. It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics, and high-energy astrophysics. It is commonly used with SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (10⁹) electronvolts; it is equivalent to the GeV.
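As a worked illustration of the E = qV relation stated above (the specific numbers here are an assumed example, not taken from the source): a doubly charged ion (q = 2e) accelerated through a potential difference of 100 V gains E = qV = 2 × 100 eV = 200 eV ≈ 3.2×10⁻¹⁷ J.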
2001-07-26T20:06:40Z
2023-12-29T10:51:01Z
[ "Template:Short description", "Template:Hatnote", "Template:Physconst", "Template:Block indent", "Template:Nowrap", "Template:Subatomic particle", "Template:Reflist", "Template:Cite web", "Template:Webarchive", "Template:Val", "Template:Cite journal", "Template:Dimanalysis", "Template:Sfrac", "Template:SI units" ]
https://en.wikipedia.org/wiki/Electronvolt
9,601
Electrochemistry
Electrochemistry is the branch of physical chemistry concerned with the relationship between electrical potential difference and identifiable chemical change. These reactions involve electrons moving via an electronically-conducting phase (typically an external electrical circuit, but not necessarily, as in electroless plating) between electrodes separated by an ionically conducting and electronically insulating electrolyte (or ionic species in a solution). When a chemical reaction is driven by an electrical potential difference, as in electrolysis, or if a potential difference results from a chemical reaction as in an electric battery or fuel cell, it is called an electrochemical reaction. Unlike in other chemical reactions, in electrochemical reactions electrons are not transferred directly between atoms, ions, or molecules, but via the aforementioned electronically-conducting circuit. This phenomenon is what distinguishes an electrochemical reaction from a conventional chemical reaction. Understanding of electrical matters began in the sixteenth century. During this century, the English scientist William Gilbert spent 17 years experimenting with magnetism and, to a lesser extent, electricity. For his work on magnets, Gilbert became known as the "Father of Magnetism." He discovered various methods for producing and strengthening magnets. In 1663, the German physicist Otto von Guericke created the first electric generator, which produced static electricity by applying friction in the machine. The generator was made of a large sulfur ball cast inside a glass globe, mounted on a shaft. The ball was rotated by means of a crank and an electric spark was produced when a pad was rubbed against the ball as it rotated. The globe could be removed and used as source for experiments with electricity. By the mid-18th century the French chemist Charles François de Cisternay du Fay had discovered two types of static electricity, and that like charges repel each other whilst unlike charges attract. Du Fay announced that electricity consisted of two fluids: "vitreous" (from the Latin for "glass"), or positive, electricity; and "resinous," or negative, electricity. This was the two-fluid theory of electricity, which was to be opposed by Benjamin Franklin's one-fluid theory later in the century. In 1785, Charles-Augustin de Coulomb developed the law of electrostatic attraction as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley in England. In the late 18th century the Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between chemical reactions and electricity on his essay "De Viribus Electricitatis in Motu Musculari Commentarius" (Latin for Commentary on the Effect of Electricity on Muscular Motion) in 1791 where he proposed a "nerveo-electrical substance" on biological life forms. In his essay Galvani concluded that animal tissue contained a here-to-fore neglected innate, vital force, which he termed "animal electricity," which activated nerves and muscles spanned by metal probes. He believed that this new force was a form of electricity in addition to the "natural" form produced by lightning or by the electric eel and torpedo ray as well as the "artificial" form produced by friction (i.e., static electricity). 
Galvani's scientific colleagues generally accepted his views, but Alessandro Volta rejected the idea of an "animal electric fluid," replying that the frog's legs responded to differences in metal temper, composition, and bulk. Galvani refuted this by obtaining muscular action with two pieces of the same material. Nevertheless, Volta's experimentation led him to develop the first practical battery, which took advantage of the relatively high energy (weak bonding) of zinc and could deliver an electrical current for much longer than any other device known at the time. In 1800, William Nicholson and Johann Wilhelm Ritter succeeded in decomposing water into hydrogen and oxygen by electrolysis using Volta's battery. Soon thereafter Ritter discovered the process of electroplating. He also observed that the amount of metal deposited and the amount of oxygen produced during an electrolytic process depended on the distance between the electrodes. By 1801, Ritter observed thermoelectric currents and anticipated the discovery of thermoelectricity by Thomas Johann Seebeck. By the 1810s, William Hyde Wollaston made improvements to the galvanic cell. Sir Humphry Davy's work with electrolysis led to the conclusion that the production of electricity in simple electrolytic cells resulted from chemical action and that chemical combination occurred between substances of opposite charge. This work led directly to the isolation of metallic sodium and potassium by electrolysis of their molten salts, and of the alkaline earth metals from theirs, in 1808. Hans Christian Ørsted's discovery of the magnetic effect of electric currents in 1820 was immediately recognized as an epoch-making advance, although he left further work on electromagnetism to others. André-Marie Ampère quickly repeated Ørsted's experiment, and formulated them mathematically. In 1821, Estonian-German physicist Thomas Johann Seebeck demonstrated the electrical potential between the juncture points of two dissimilar metals when there is a temperature difference between the joints. In 1827, the German scientist Georg Ohm expressed his law in this famous book "Die galvanische Kette, mathematisch bearbeitet" (The Galvanic Circuit Investigated Mathematically) in which he gave his complete theory of electricity. In 1832, Michael Faraday's experiments led him to state his two laws of electrochemistry. In 1836, John Daniell invented a primary cell which solved the problem of polarization by introducing copper ions into the solution near the positive electrode and thus eliminating hydrogen gas generation. Later results revealed that at the other electrode, amalgamated zinc (i.e., zinc alloyed with mercury) would produce a higher voltage. William Grove produced the first fuel cell in 1839. In 1846, Wilhelm Weber developed the electrodynamometer. In 1868, Georges Leclanché patented a new cell which eventually became the forerunner to the world's first widely used battery, the zinc–carbon cell. Svante Arrhenius published his thesis in 1884 on Recherches sur la conductibilité galvanique des électrolytes (Investigations on the galvanic conductivity of electrolytes). From his results the author concluded that electrolytes, when dissolved in water, become to varying degrees split or dissociated into electrically opposite positive and negative ions. In 1886, Paul Héroult and Charles M. Hall developed an efficient method (the Hall–Héroult process) to obtain aluminium using electrolysis of molten alumina. 
In 1894, Friedrich Ostwald concluded important studies of the conductivity and electrolytic dissociation of organic acids. Walther Hermann Nernst developed the theory of the electromotive force of the voltaic cell in 1888. In 1889, he showed how the characteristics of the voltage produced could be used to calculate the free energy change in the chemical reaction producing the voltage. He constructed an equation, known as Nernst equation, which related the voltage of a cell to its properties. In 1898, Fritz Haber showed that definite reduction products can result from electrolytic processes if the potential at the cathode is kept constant. In 1898, he explained the reduction of nitrobenzene in stages at the cathode and this became the model for other similar reduction processes. In 1902, The Electrochemical Society (ECS) was founded. In 1909, Robert Andrews Millikan began a series of experiments (see oil drop experiment) to determine the electric charge carried by a single electron. In 1911, Harvey Fletcher, working with Millikan, was successful in measuring the charge on the electron, by replacing the water droplets used by Millikan, which quickly evaporated, with oil droplets. Within one day Fletcher measured the charge of an electron within several decimal places. In 1923, Johannes Nicolaus Brønsted and Martin Lowry published essentially the same theory about how acids and bases behave, using an electrochemical basis. In 1937, Arne Tiselius developed the first sophisticated electrophoretic apparatus. Some years later, he was awarded the 1948 Nobel Prize for his work in protein electrophoresis. A year later, in 1949, the International Society of Electrochemistry (ISE) was founded. By the 1960s–1970s quantum electrochemistry was developed by Revaz Dogonadze and his students. The term "redox" stands for reduction-oxidation. It refers to electrochemical processes involving electron transfer to or from a molecule or ion, changing its oxidation state. This reaction can occur through the application of an external voltage or through the release of chemical energy. Oxidation and reduction describe the change of oxidation state that takes place in the atoms, ions or molecules involved in an electrochemical reaction. Formally, oxidation state is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic. An atom or ion that gives up an electron to another atom or ion has its oxidation state increase, and the recipient of the negatively charged electron has its oxidation state decrease. For example, when atomic sodium reacts with atomic chlorine, sodium donates one electron and attains an oxidation state of +1. Chlorine accepts the electron and its oxidation state is reduced to −1. The sign of the oxidation state (positive/negative) actually corresponds to the value of each ion's electronic charge. The attraction of the differently charged sodium and chlorine ions is the reason they then form an ionic bond. The loss of electrons from an atom or molecule is called oxidation, and the gain of electrons is reduction. This can be easily remembered through the use of mnemonic devices. Two of the most popular are "OIL RIG" (Oxidation Is Loss, Reduction Is Gain) and "LEO" the lion says "GER" (Lose Electrons: Oxidation, Gain Electrons: Reduction). Oxidation and reduction always occur in a paired fashion such that one species is oxidized when another is reduced. 
For cases where electrons are shared (covalent bonds) between atoms with large differences in electronegativity, the electron is assigned to the atom with the largest electronegativity in determining the oxidation state. The atom or molecule which loses electrons is known as the reducing agent, or reductant, and the substance which accepts the electrons is called the oxidizing agent, or oxidant. Thus, the oxidizing agent is always being reduced in a reaction; the reducing agent is always being oxidized. Oxygen is a common oxidizing agent, but not the only one. Despite the name, an oxidation reaction does not necessarily need to involve oxygen. In fact, a fire can be fed by an oxidant other than oxygen; fluorine fires are often unquenchable, as fluorine is an even stronger oxidant (it has a weaker bond and higher electronegativity, and thus accepts electrons even better) than oxygen. For reactions involving oxygen, the gain of oxygen implies the oxidation of the atom or molecule to which the oxygen is added (and the oxygen is reduced). In organic compounds, such as butane or ethanol, the loss of hydrogen implies oxidation of the molecule from which it is lost (and the hydrogen is reduced). This follows because the hydrogen donates its electron in covalent bonds with non-metals but it takes the electron along when it is lost. Conversely, loss of oxygen or gain of hydrogen implies reduction. Electrochemical reactions in water are better analyzed by using the ion-electron method, where H, OH ion, H2O and electrons (to compensate the oxidation changes) are added to the cell's half-reactions for oxidation and reduction. In acidic medium, H ions and water are added to balance each half-reaction. For example, when manganese reacts with sodium bismuthate. Finally, the reaction is balanced by multiplying the stoichiometric coefficients so the numbers of electrons in both half reactions match and adding the resulting half reactions to give the balanced reaction: In basic medium, OH ions and water are added to balance each half-reaction. For example, in a reaction between potassium permanganate and sodium sulfite: Here, 'spectator ions' (K, Na) were omitted from the half-reactions. By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match: the balanced overall reaction is obtained: The same procedure as used in acidic medium can be applied, for example, to balance the complete combustion of propane: By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match: the balanced equation is obtained: An electrochemical cell is a device that produces an electric current from energy released by a spontaneous redox reaction. This kind of cell includes the Galvanic cell or Voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted experiments on chemical reactions and electric current during the late 18th century. Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move. 
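The bookkeeping step of the ion-electron method described above, scaling the two half-reactions so that the electrons lost equal the electrons gained, can be sketched in a few lines of Python. This is only an illustrative sketch: the electron counts for the permanganate/sulfite case are those of the standard half-reactions in basic medium, and the helper name is made up for the example.

from math import lcm

def half_reaction_multipliers(electrons_gained: int, electrons_lost: int) -> tuple[int, int]:
    # Smallest whole-number multipliers that equalize the electrons in the
    # reduction and oxidation half-reactions.
    n = lcm(electrons_gained, electrons_lost)
    return n // electrons_gained, n // electrons_lost

# Permanganate/sulfite in basic medium: MnO4- gains 3 e-, SO3(2-) loses 2 e-.
red, ox = half_reaction_multipliers(3, 2)
print(red, ox)  # 2 3, i.e. 2 MnO4- + 3 SO3(2-) + H2O -> 2 MnO2 + 3 SO4(2-) + 2 OH-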
The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. This flow of electrons is an electric current that can be used to do work, such as turn a motor or power a light. A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell. The half reactions in a Daniell cell are as follows: In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and the ions deposit at the copper cathode as an electrodeposit. This cell forms a simple battery as it will spontaneously generate a flow of electric current from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the anode and formation of copper ions at the cathode. To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while minimizing electrolyte mixing. To further minimize mixing of the electrolytes, a salt bridge can be used which consists of an electrolyte saturated gel in an inverted U-tube. As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte. A voltmeter is capable of measuring the change of electrical potential between the anode and the cathode. The electrochemical cell voltage is also referred to as electromotive force or emf. A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell: First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the limit between the phases (oxidation changes). The double vertical lines represent the saline bridge on the cell. Finally, the oxidized form of the metal to be reduced at the cathode, is written, separated from its reduced form by the vertical line. The electrolyte concentration is given as it is an important variable in determining the exact cell potential. To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). 
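As a small illustration of how such tabulations are used, the Python sketch below combines two tabulated standard reduction potentials into a standard cell potential. The abbreviated table holds common textbook values and the couple labels are purely illustrative; this is a sketch of the arithmetic, not a general electrochemistry library.

# Standard reduction potentials in volts versus the SHE (textbook values).
E_STD = {
    "Cu2+/Cu": 0.34,
    "Zn2+/Zn": -0.76,
    "2H+/H2": 0.00,
}

def standard_cell_potential(cathode_couple: str, anode_couple: str) -> float:
    # E(cell) = E(cathode, reduction) - E(anode, reduction); equivalently the
    # cathode reduction potential plus the anode oxidation potential.
    return E_STD[cathode_couple] - E_STD[anode_couple]

# Daniell cell, Zn(s) | Zn2+ || Cu2+ | Cu(s): the couple with the smaller
# reduction potential (zinc) is the anode.
print(standard_cell_potential("Cu2+/Cu", "Zn2+/Zn"))  # 1.10 V, spontaneous as written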
The standard hydrogen electrode undergoes the reaction which is shown as a reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term standard in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H activity equal to 1 (usually assumed to be [H] = 1 mol/liter, i.e. pH = 0). The SHE electrode can be connected to any other electrode by a salt bridge and an external circuit to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, then that means it is a strongly reducing electrode which forces the SHE to be the anode (an example is Cu in aqueous CuSO4 with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode is more oxidizing than the SHE (such as Zn in ZnSO4 where the standard electrode potential is −0.76 V). Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). The one that is smaller will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode. For example, the standard electrode potential for a copper electrode is: Cell diagram At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode giving Or, Changes in the stoichiometric coefficients of a balanced cell equation will not change the E°red value because the standard electrode potential is an intensive property. During operation of an electrochemical cell, chemical energy is transformed into electrical energy. This can be expressed mathematically as the product of the cell's emf Ecell measured in volts (V) and the electric charge Qele,trans transferred through the external circuit. Qele,trans is the cell current integrated over time and measured in coulombs (C); it can also be determined by multiplying the total number ne of electrons transferred (measured in moles) times Faraday's constant (F). The emf of the cell at zero current is the maximum possible emf. It can be used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation: where work is defined as positive when it increases the energy of the system. 
Since the free energy is the maximum amount of work that can be extracted from a system, one can write: ΔG = −neFEcell. A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell's production of an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis. A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy. Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example. The relation between the equilibrium constant, K, and the Gibbs free energy for an electrochemical cell is expressed as follows: ΔG° = −RT ln K = −neFE°cell. Rearranging to express the relation between standard potential and equilibrium constant yields E°cell = (RT/neF) ln K. At T = 298 K, the previous equation can be rewritten using the Briggsian (base-10) logarithm as follows: E°cell = (0.05916 V/ne) log K. The standard potential of an electrochemical cell requires standard conditions (ΔG°) for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. In the late 19th century the German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential. Somewhat earlier, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy: ΔG = ΔG° + RT ln Q. Here ΔG is the change in Gibbs free energy, ΔG° is the standard free energy change (the value of ΔG when Q equals 1), T is absolute temperature (Kelvin), R is the gas constant and Q is the reaction quotient, which can be calculated by dividing concentrations of products by those of reactants, each raised to the power of its stoichiometric coefficient, using only those products and reactants that are aqueous or gaseous. Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity. Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes neFΔE = neFΔE° − RT ln Q. Here ne is the number of electrons (in moles), F is the Faraday constant (in coulombs/mole), and ΔE is the cell potential (in volts). Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name: ΔE = ΔE° − (RT/neF) ln Q. Assuming standard conditions (T = 298 K or 25 °C) and R = 8.3145 J/(K·mol), the equation above can be expressed using the base-10 logarithm as shown below: ΔE = ΔE° − (0.05916 V/ne) log Q. Note that RT/F is also known as the thermal voltage VT and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10.
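The relations above can be checked numerically. The Python sketch below evaluates the Nernst equation, the standard free energy change and the equilibrium constant for a Daniell cell; the E° of 1.10 V is the familiar textbook value, and the ion concentrations are arbitrary example inputs rather than figures from the text.

import math

R = 8.3145    # gas constant, J/(K*mol)
F = 96485.0   # Faraday constant, C/mol
T = 298.15    # K

def nernst(E_std: float, n_e: int, Q: float, temp: float = T) -> float:
    # E = E° - (RT / nF) ln Q
    return E_std - (R * temp / (n_e * F)) * math.log(Q)

def delta_G_std(E_std: float, n_e: int) -> float:
    # ΔG° = -n F E°, in joules per mole of reaction
    return -n_e * F * E_std

def equilibrium_constant(E_std: float, n_e: int, temp: float = T) -> float:
    # K = exp(n F E° / RT), from ΔG° = -RT ln K
    return math.exp(n_e * F * E_std / (R * temp))

# Daniell cell: E° = 1.10 V, n = 2, with [Zn2+] = 1.0 M and [Cu2+] = 0.1 M.
Q = 1.0 / 0.1
print(round(nernst(1.10, 2, Q), 3), "V")                 # about 1.070 V
print(round(delta_G_std(1.10, 2) / 1000, 1), "kJ/mol")   # about -212.3 kJ/mol
print(f"{equilibrium_constant(1.10, 2):.1e}")            # about 1.5e+37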
A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes on the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells. An example is an electrochemical cell, where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both can undergo the same chemistry (although the reaction proceeds in reverse at the anode) Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu ions increases. Reduction will take place in the cell's compartment where the concentration is higher and oxidation will occur on the more dilute side. The following cell diagram describes the concentration cell mentioned above: where the half cell reactions for oxidation and reduction are: The cell's emf is calculated through the Nernst equation as follows: The value of E° in this kind of cell is zero, as electrodes and ions are the same in both half-cells. After replacing values from the case mentioned, it is possible to calculate cell's potential: or by: However, this value is only approximate, as reaction quotient is defined in terms of ion activities which can be approximated with the concentrations as calculated here. The Nernst equation plays an important role in understanding electrical effects in cells and organelles. Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell. Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc-manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery using zinc and mercuric oxide provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells. The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is if left uncharged acid will crystallize within the lead plates of the battery rendering it useless. These batteries last an average of 3 years with daily use but it is not unheard of for a lead acid battery to still be functional after 7–10 years. Lead-acid cells continue to be widely used in automobiles. All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low temperature performance. The lithium metal battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices. 
The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen and oxygen directly into electrical energy with a much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system. Corrosion is an electrochemical process, which reveals itself as rust or tarnish on metals like iron or copper and their respective alloys, steel and brass. For iron rust to occur the metal has to be in contact with oxygen and water. The chemical reactions for this process are relatively complex and not all of them are completely understood. It is believed the causes are the following: Electron transfer (reduction-oxidation) Iron corrosion takes place in an acid medium; H ions come from reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. Fe ions oxidize further, following this equation: Iron(III) oxide hydrate is known as rust. The concentration of water associated with iron oxide varies, thus the chemical formula is represented by Fe2O3·xH2O. An electric circuit is formed as passage of electrons and ions occurs; thus if an electrolyte is present it will facilitate oxidation, explaining why rusting is quicker in salt water. Coinage metals, such as copper and silver, slowly corrode through use. A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high sulfur foods such as eggs or the low levels of sulfur species in the air develop a layer of black silver sulfide. Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia. Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface, which bonds with the underlying metal. This thin oxide layer protects the underlying bulk of the metal from the air preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized. Attempts to save a metal from becoming anodic are of two general types. Anodic regions dissolve and destroy the structural integrity of the metal. While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur. Metals can be coated with paint or other less conductive metals (passivation). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion. The region under the coating adjacent to the scratch acts as the anode of the reaction. See Anodizing A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, thus spared corrosion. 
It is called "sacrificial" because the anode dissolves and has to be replaced periodically. Zinc bars are attached to various locations on steel ship hulls to render the ship hull cathodic. The zinc bars are replaced periodically. Other metals, such as magnesium, would work very well but zinc is the least expensive useful metal. To protect pipelines, an ingot of buried or exposed magnesium (or zinc) is buried beside the pipeline and is connected electrically to the pipe above ground. The pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed. At intervals new ingots are buried to replace those dissolved. The spontaneous redox reactions of a conventional battery produce electricity through the different reduction potentials of the cathode and anode in the electrolyte. However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell. When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine. Industrially this process takes place in a special cell named Down's cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell. Reactions that take place in a Down's cell are the following: This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in mineral dressing and metallurgy industries. The emf for this process is approximately −4 V indicating a (very) non-spontaneous process. In order for this reaction to occur the power supply should provide at least a potential difference of 4 V. However, larger voltages must be used for this reaction to occur at a high rate. Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. Water does not decompose into hydrogen and oxygen spontaneously as the Gibbs free energy change for the process at standard conditions is very positive, about 474.4 kJ. The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell. In it, a pair of inert electrodes usually made of platinum immersed in water act as anode and cathode in the electrolytic process. The electrolysis starts with the application of an external voltage between the electrodes. This process will not occur except at extremely high voltages without an electrolyte such as sodium chloride or sulfuric acid (most used 0.1 M). Bubbles from the gases will be seen near both electrodes. The following half reactions describe the process mentioned above: Although strong acids may be used in the apparatus, the reaction will not net consume the acid. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively low voltages (~2 V depending on the pH). Electrolysis in an aqueous solution is a similar process as mentioned in electrolysis of water. However, it is considered to be a complex process because the contents in solution have to be analyzed in half reactions, whether reduced or oxidized. The presence of water in a solution of sodium chloride must be examined in respect to its reduction and oxidation in both electrodes. Usually, water is electrolysed as mentioned above in electrolysis of water yielding gaseous oxygen in the anode and gaseous hydrogen in the cathode. 
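Returning to the electrolysis of pure water, the thermodynamic minimum applied potential can be estimated from the ΔG figure quoted above. The sketch below assumes the reaction is written for two moles of liquid water, so four moles of electrons are transferred; real cells need a higher voltage (roughly 2 V with platinum electrodes) because of overpotentials and ohmic losses.

F = 96485.0  # Faraday constant, C/mol

def minimum_electrolysis_voltage(delta_G_joules: float, n_electrons: float) -> float:
    # Thermodynamic minimum applied potential: |ΔG| / (n F)
    return abs(delta_G_joules) / (n_electrons * F)

# 2 H2O(l) -> 2 H2(g) + O2(g): ΔG° ≈ +474.4 kJ, 4 mol of electrons transferred.
print(round(minimum_electrolysis_voltage(474.4e3, 4), 2), "V")  # about 1.23 V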
On the other hand, sodium chloride in water dissociates in Na and Cl ions. The cation, which is the positive ion, will be attracted to the cathode (−), thus reducing the sodium ion. The chloride anion will then be attracted to the anode (+), where it is oxidized to chlorine gas. The following half reactions should be considered in the process mentioned: Reaction 1 is discarded as it has the most negative value on standard reduction potential thus making it less thermodynamically favorable in the process. When comparing the reduction potentials in reactions 2 and 4, the reduction of chloride ion is favored. Thus, if the Cl ion is favored for reduction, then the water reaction is favored for oxidation producing gaseous oxygen, however experiments show gaseous chlorine is produced and not oxygen. Although the initial analysis is correct, there is another effect, known as the overvoltage effect. Additional voltage is sometimes required, beyond the voltage predicted by the E°cell. This may be due to kinetic rather than thermodynamic considerations. In fact, it has been proven that the activation energy for the chloride ion is very low, hence favorable in kinetic terms. In other words, although the voltage applied is thermodynamically sufficient to drive electrolysis, the rate is so slow that to make the process proceed in a reasonable time frame, the voltage of the external source has to be increased (hence, overvoltage). Finally, reaction 3 is favorable because it describes the proliferation of OH ions thus letting a probable reduction of H ions less favorable an option. The overall reaction for the process according to the analysis is the following: As the overall reaction indicates, the concentration of chloride ions is reduced in comparison to OH ions (whose concentration increases). The reaction also shows the production of gaseous hydrogen, chlorine and aqueous sodium hydroxide. Quantitative aspects of electrolysis were originally developed by Michael Faraday in 1834. Faraday is also credited to have coined the terms electrolyte, electrolysis, among many others while he studied quantitative analysis of electrochemical reactions. Also he was an advocate of the law of conservation of energy. Faraday concluded after several experiments on electric current in a non-spontaneous process that the mass of the products yielded on the electrodes was proportional to the value of current supplied to the cell, the length of time the current existed, and the molar mass of the substance analyzed. In other words, the amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell. Below is a simplified equation of Faraday's first law: where Faraday devised the laws of chemical electrodeposition of metals from solutions in 1857. He formulated the second law of electrolysis stating "the amounts of bodies which are equivalent to each other in their ordinary chemical action have equal quantities of electricity naturally associated with them." In other words, the quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights. An important aspect of the second law of electrolysis is electroplating, which together with the first law of electrolysis has a significant number of applications in industry, as when used to protectively coat metals to avoid corrosion. 
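Faraday's first law lends itself to a direct calculation of the mass deposited at an electrode. In the Python sketch below the plating current, duration and the choice of copper are illustrative values, not figures from the text.

F = 96485.0  # Faraday constant, C/mol

def mass_deposited(current_A: float, time_s: float, molar_mass_g: float, z: int) -> float:
    # Faraday's first law: m = (I * t / F) * (M / z), in grams.
    return (current_A * time_s / F) * (molar_mass_g / z)

# Copper plating at 2.0 A for one hour; Cu2+ + 2 e- -> Cu, M(Cu) = 63.55 g/mol.
print(round(mass_deposited(2.0, 3600.0, 63.55, 2), 2), "g")  # about 2.37 g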
There are various important electrochemical processes in both nature and industry, like the coating of objects with metals or metal oxides through electrodeposition, the addition (electroplating) or removal (electropolishing) of thin layers of metal from an object's surface, and the detection of alcohol in drunk drivers through the redox reaction of ethanol. The generation of chemical energy through photosynthesis is inherently an electrochemical process, as is production of metals like aluminum and titanium from their ores. Certain diabetes blood sugar meters measure the amount of glucose in the blood through its redox potential. In addition to established electrochemical technologies (like deep cycle lead acid batteries) there is also a wide range of new emerging technologies such as fuel cells, large format lithium-ion batteries, electrochemical reactors and super-capacitors that are becoming increasingly commercial. Electrochemical or coulometric titrations were introduced for quantitative analysis of minute quantities in 1938 by the Hungarian chemists László Szebellédy and Zoltan Somogyi. Electrochemistry also has important applications in the food industry, like the assessment of food/package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, or the determination of free acidity in olive oil.
[ { "paragraph_id": 0, "text": "Electrochemistry is the branch of physical chemistry concerned with the relationship between electrical potential difference and identifiable chemical change. These reactions involve electrons moving via an electronically-conducting phase (typically an external electrical circuit, but not necessarily, as in electroless plating) between electrodes separated by an ionically conducting and electronically insulating electrolyte (or ionic species in a solution).", "title": "" }, { "paragraph_id": 1, "text": "When a chemical reaction is driven by an electrical potential difference, as in electrolysis, or if a potential difference results from a chemical reaction as in an electric battery or fuel cell, it is called an electrochemical reaction. Unlike in other chemical reactions, in electrochemical reactions electrons are not transferred directly between atoms, ions, or molecules, but via the aforementioned electronically-conducting circuit. This phenomenon is what distinguishes an electrochemical reaction from a conventional chemical reaction.", "title": "" }, { "paragraph_id": 2, "text": "Understanding of electrical matters began in the sixteenth century. During this century, the English scientist William Gilbert spent 17 years experimenting with magnetism and, to a lesser extent, electricity. For his work on magnets, Gilbert became known as the \"Father of Magnetism.\" He discovered various methods for producing and strengthening magnets.", "title": "History" }, { "paragraph_id": 3, "text": "In 1663, the German physicist Otto von Guericke created the first electric generator, which produced static electricity by applying friction in the machine. The generator was made of a large sulfur ball cast inside a glass globe, mounted on a shaft. The ball was rotated by means of a crank and an electric spark was produced when a pad was rubbed against the ball as it rotated. The globe could be removed and used as source for experiments with electricity.", "title": "History" }, { "paragraph_id": 4, "text": "By the mid-18th century the French chemist Charles François de Cisternay du Fay had discovered two types of static electricity, and that like charges repel each other whilst unlike charges attract. Du Fay announced that electricity consisted of two fluids: \"vitreous\" (from the Latin for \"glass\"), or positive, electricity; and \"resinous,\" or negative, electricity. This was the two-fluid theory of electricity, which was to be opposed by Benjamin Franklin's one-fluid theory later in the century.", "title": "History" }, { "paragraph_id": 5, "text": "In 1785, Charles-Augustin de Coulomb developed the law of electrostatic attraction as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley in England.", "title": "History" }, { "paragraph_id": 6, "text": "In the late 18th century the Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between chemical reactions and electricity on his essay \"De Viribus Electricitatis in Motu Musculari Commentarius\" (Latin for Commentary on the Effect of Electricity on Muscular Motion) in 1791 where he proposed a \"nerveo-electrical substance\" on biological life forms.", "title": "History" }, { "paragraph_id": 7, "text": "In his essay Galvani concluded that animal tissue contained a here-to-fore neglected innate, vital force, which he termed \"animal electricity,\" which activated nerves and muscles spanned by metal probes. 
He believed that this new force was a form of electricity in addition to the \"natural\" form produced by lightning or by the electric eel and torpedo ray as well as the \"artificial\" form produced by friction (i.e., static electricity).", "title": "History" }, { "paragraph_id": 8, "text": "Galvani's scientific colleagues generally accepted his views, but Alessandro Volta rejected the idea of an \"animal electric fluid,\" replying that the frog's legs responded to differences in metal temper, composition, and bulk. Galvani refuted this by obtaining muscular action with two pieces of the same material. Nevertheless, Volta's experimentation led him to develop the first practical battery, which took advantage of the relatively high energy (weak bonding) of zinc and could deliver an electrical current for much longer than any other device known at the time.", "title": "History" }, { "paragraph_id": 9, "text": "In 1800, William Nicholson and Johann Wilhelm Ritter succeeded in decomposing water into hydrogen and oxygen by electrolysis using Volta's battery. Soon thereafter Ritter discovered the process of electroplating. He also observed that the amount of metal deposited and the amount of oxygen produced during an electrolytic process depended on the distance between the electrodes. By 1801, Ritter observed thermoelectric currents and anticipated the discovery of thermoelectricity by Thomas Johann Seebeck.", "title": "History" }, { "paragraph_id": 10, "text": "By the 1810s, William Hyde Wollaston made improvements to the galvanic cell. Sir Humphry Davy's work with electrolysis led to the conclusion that the production of electricity in simple electrolytic cells resulted from chemical action and that chemical combination occurred between substances of opposite charge. This work led directly to the isolation of metallic sodium and potassium by electrolysis of their molten salts, and of the alkaline earth metals from theirs, in 1808.", "title": "History" }, { "paragraph_id": 11, "text": "Hans Christian Ørsted's discovery of the magnetic effect of electric currents in 1820 was immediately recognized as an epoch-making advance, although he left further work on electromagnetism to others. André-Marie Ampère quickly repeated Ørsted's experiment, and formulated them mathematically.", "title": "History" }, { "paragraph_id": 12, "text": "In 1821, Estonian-German physicist Thomas Johann Seebeck demonstrated the electrical potential between the juncture points of two dissimilar metals when there is a temperature difference between the joints.", "title": "History" }, { "paragraph_id": 13, "text": "In 1827, the German scientist Georg Ohm expressed his law in this famous book \"Die galvanische Kette, mathematisch bearbeitet\" (The Galvanic Circuit Investigated Mathematically) in which he gave his complete theory of electricity.", "title": "History" }, { "paragraph_id": 14, "text": "In 1832, Michael Faraday's experiments led him to state his two laws of electrochemistry. In 1836, John Daniell invented a primary cell which solved the problem of polarization by introducing copper ions into the solution near the positive electrode and thus eliminating hydrogen gas generation. Later results revealed that at the other electrode, amalgamated zinc (i.e., zinc alloyed with mercury) would produce a higher voltage.", "title": "History" }, { "paragraph_id": 15, "text": "William Grove produced the first fuel cell in 1839. In 1846, Wilhelm Weber developed the electrodynamometer. 
In 1868, Georges Leclanché patented a new cell which eventually became the forerunner to the world's first widely used battery, the zinc–carbon cell.", "title": "History" }, { "paragraph_id": 16, "text": "Svante Arrhenius published his thesis in 1884 on Recherches sur la conductibilité galvanique des électrolytes (Investigations on the galvanic conductivity of electrolytes). From his results the author concluded that electrolytes, when dissolved in water, become to varying degrees split or dissociated into electrically opposite positive and negative ions.", "title": "History" }, { "paragraph_id": 17, "text": "In 1886, Paul Héroult and Charles M. Hall developed an efficient method (the Hall–Héroult process) to obtain aluminium using electrolysis of molten alumina.", "title": "History" }, { "paragraph_id": 18, "text": "In 1894, Friedrich Ostwald concluded important studies of the conductivity and electrolytic dissociation of organic acids.", "title": "History" }, { "paragraph_id": 19, "text": "Walther Hermann Nernst developed the theory of the electromotive force of the voltaic cell in 1888. In 1889, he showed how the characteristics of the voltage produced could be used to calculate the free energy change in the chemical reaction producing the voltage. He constructed an equation, known as Nernst equation, which related the voltage of a cell to its properties.", "title": "History" }, { "paragraph_id": 20, "text": "In 1898, Fritz Haber showed that definite reduction products can result from electrolytic processes if the potential at the cathode is kept constant. In 1898, he explained the reduction of nitrobenzene in stages at the cathode and this became the model for other similar reduction processes.", "title": "History" }, { "paragraph_id": 21, "text": "In 1902, The Electrochemical Society (ECS) was founded.", "title": "History" }, { "paragraph_id": 22, "text": "In 1909, Robert Andrews Millikan began a series of experiments (see oil drop experiment) to determine the electric charge carried by a single electron. In 1911, Harvey Fletcher, working with Millikan, was successful in measuring the charge on the electron, by replacing the water droplets used by Millikan, which quickly evaporated, with oil droplets. Within one day Fletcher measured the charge of an electron within several decimal places.", "title": "History" }, { "paragraph_id": 23, "text": "In 1923, Johannes Nicolaus Brønsted and Martin Lowry published essentially the same theory about how acids and bases behave, using an electrochemical basis.", "title": "History" }, { "paragraph_id": 24, "text": "In 1937, Arne Tiselius developed the first sophisticated electrophoretic apparatus. Some years later, he was awarded the 1948 Nobel Prize for his work in protein electrophoresis.", "title": "History" }, { "paragraph_id": 25, "text": "A year later, in 1949, the International Society of Electrochemistry (ISE) was founded.", "title": "History" }, { "paragraph_id": 26, "text": "By the 1960s–1970s quantum electrochemistry was developed by Revaz Dogonadze and his students.", "title": "History" }, { "paragraph_id": 27, "text": "The term \"redox\" stands for reduction-oxidation. It refers to electrochemical processes involving electron transfer to or from a molecule or ion, changing its oxidation state. This reaction can occur through the application of an external voltage or through the release of chemical energy. 
Oxidation and reduction describe the change of oxidation state that takes place in the atoms, ions or molecules involved in an electrochemical reaction. Formally, oxidation state is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic. An atom or ion that gives up an electron to another atom or ion has its oxidation state increase, and the recipient of the negatively charged electron has its oxidation state decrease.", "title": "Principles" }, { "paragraph_id": 28, "text": "For example, when atomic sodium reacts with atomic chlorine, sodium donates one electron and attains an oxidation state of +1. Chlorine accepts the electron and its oxidation state is reduced to −1. The sign of the oxidation state (positive/negative) actually corresponds to the value of each ion's electronic charge. The attraction of the differently charged sodium and chlorine ions is the reason they then form an ionic bond.", "title": "Principles" }, { "paragraph_id": 29, "text": "The loss of electrons from an atom or molecule is called oxidation, and the gain of electrons is reduction. This can be easily remembered through the use of mnemonic devices. Two of the most popular are \"OIL RIG\" (Oxidation Is Loss, Reduction Is Gain) and \"LEO\" the lion says \"GER\" (Lose Electrons: Oxidation, Gain Electrons: Reduction). Oxidation and reduction always occur in a paired fashion such that one species is oxidized when another is reduced. For cases where electrons are shared (covalent bonds) between atoms with large differences in electronegativity, the electron is assigned to the atom with the largest electronegativity in determining the oxidation state.", "title": "Principles" }, { "paragraph_id": 30, "text": "The atom or molecule which loses electrons is known as the reducing agent, or reductant, and the substance which accepts the electrons is called the oxidizing agent, or oxidant. Thus, the oxidizing agent is always being reduced in a reaction; the reducing agent is always being oxidized. Oxygen is a common oxidizing agent, but not the only one. Despite the name, an oxidation reaction does not necessarily need to involve oxygen. In fact, a fire can be fed by an oxidant other than oxygen; fluorine fires are often unquenchable, as fluorine is an even stronger oxidant (it has a weaker bond and higher electronegativity, and thus accepts electrons even better) than oxygen.", "title": "Principles" }, { "paragraph_id": 31, "text": "For reactions involving oxygen, the gain of oxygen implies the oxidation of the atom or molecule to which the oxygen is added (and the oxygen is reduced). In organic compounds, such as butane or ethanol, the loss of hydrogen implies oxidation of the molecule from which it is lost (and the hydrogen is reduced). This follows because the hydrogen donates its electron in covalent bonds with non-metals but it takes the electron along when it is lost. Conversely, loss of oxygen or gain of hydrogen implies reduction.", "title": "Principles" }, { "paragraph_id": 32, "text": "Electrochemical reactions in water are better analyzed by using the ion-electron method, where H, OH ion, H2O and electrons (to compensate the oxidation changes) are added to the cell's half-reactions for oxidation and reduction.", "title": "Principles" }, { "paragraph_id": 33, "text": "In acidic medium, H ions and water are added to balance each half-reaction. 
For example, when manganese reacts with sodium bismuthate.", "title": "Principles" }, { "paragraph_id": 34, "text": "Finally, the reaction is balanced by multiplying the stoichiometric coefficients so the numbers of electrons in both half reactions match", "title": "Principles" }, { "paragraph_id": 35, "text": "and adding the resulting half reactions to give the balanced reaction:", "title": "Principles" }, { "paragraph_id": 36, "text": "In basic medium, OH ions and water are added to balance each half-reaction. For example, in a reaction between potassium permanganate and sodium sulfite:", "title": "Principles" }, { "paragraph_id": 37, "text": "Here, 'spectator ions' (K, Na) were omitted from the half-reactions. By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match:", "title": "Principles" }, { "paragraph_id": 38, "text": "the balanced overall reaction is obtained:", "title": "Principles" }, { "paragraph_id": 39, "text": "The same procedure as used in acidic medium can be applied, for example, to balance the complete combustion of propane:", "title": "Principles" }, { "paragraph_id": 40, "text": "By multiplying the stoichiometric coefficients so the numbers of electrons in both half reaction match:", "title": "Principles" }, { "paragraph_id": 41, "text": "the balanced equation is obtained:", "title": "Principles" }, { "paragraph_id": 42, "text": "An electrochemical cell is a device that produces an electric current from energy released by a spontaneous redox reaction. This kind of cell includes the Galvanic cell or Voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted experiments on chemical reactions and electric current during the late 18th century.", "title": "Electrochemical cells" }, { "paragraph_id": 43, "text": "Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move.", "title": "Electrochemical cells" }, { "paragraph_id": 44, "text": "The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. 
This flow of electrons is an electric current that can be used to do work, such as turn a motor or power a light.", "title": "Electrochemical cells" }, { "paragraph_id": 45, "text": "A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell.", "title": "Electrochemical cells" }, { "paragraph_id": 46, "text": "The half reactions in a Daniell cell are as follows:", "title": "Electrochemical cells" }, { "paragraph_id": 47, "text": "In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and the ions deposit at the copper cathode as an electrodeposit. This cell forms a simple battery as it will spontaneously generate a flow of electric current from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the anode and formation of copper ions at the cathode.", "title": "Electrochemical cells" }, { "paragraph_id": 48, "text": "To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while minimizing electrolyte mixing. To further minimize mixing of the electrolytes, a salt bridge can be used which consists of an electrolyte saturated gel in an inverted U-tube. As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte.", "title": "Electrochemical cells" }, { "paragraph_id": 49, "text": "A voltmeter is capable of measuring the change of electrical potential between the anode and the cathode.", "title": "Electrochemical cells" }, { "paragraph_id": 50, "text": "The electrochemical cell voltage is also referred to as electromotive force or emf.", "title": "Electrochemical cells" }, { "paragraph_id": 51, "text": "A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell:", "title": "Electrochemical cells" }, { "paragraph_id": 52, "text": "First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the limit between the phases (oxidation changes). The double vertical lines represent the saline bridge on the cell. Finally, the oxidized form of the metal to be reduced at the cathode, is written, separated from its reduced form by the vertical line. The electrolyte concentration is given as it is an important variable in determining the exact cell potential.", "title": "Electrochemical cells" }, { "paragraph_id": 53, "text": "To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). 
The standard hydrogen electrode undergoes the reaction", "title": "Standard electrode potential" }, { "paragraph_id": 54, "text": "which is shown as a reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term standard in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H activity equal to 1 (usually assumed to be [H] = 1 mol/liter, i.e. pH = 0).", "title": "Standard electrode potential" }, { "paragraph_id": 55, "text": "The SHE electrode can be connected to any other electrode by a salt bridge and an external circuit to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, then that means it is a strongly reducing electrode which forces the SHE to be the anode (an example is Cu in aqueous CuSO4 with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode is more oxidizing than the SHE (such as Zn in ZnSO4 where the standard electrode potential is −0.76 V).", "title": "Standard electrode potential" }, { "paragraph_id": 56, "text": "Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). The one that is smaller will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode.", "title": "Standard electrode potential" }, { "paragraph_id": 57, "text": "For example, the standard electrode potential for a copper electrode is:", "title": "Standard electrode potential" }, { "paragraph_id": 58, "text": "Cell diagram", "title": "Standard electrode potential" }, { "paragraph_id": 59, "text": "At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode giving", "title": "Standard electrode potential" }, { "paragraph_id": 60, "text": "Or,", "title": "Standard electrode potential" }, { "paragraph_id": 61, "text": "Changes in the stoichiometric coefficients of a balanced cell equation will not change the E°red value because the standard electrode potential is an intensive property.", "title": "Standard electrode potential" }, { "paragraph_id": 62, "text": "During operation of an electrochemical cell, chemical energy is transformed into electrical energy. 
This can be expressed mathematically as the product of the cell's emf Ecell measured in volts (V) and the electric charge Qele,trans transferred through the external circuit.", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 63, "text": "Qele,trans is the cell current integrated over time and measured in coulombs (C); it can also be determined by multiplying the total number ne of electrons transferred (measured in moles) times Faraday's constant (F).", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 64, "text": "The emf of the cell at zero current is the maximum possible emf. It can be used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation:", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 65, "text": "where work is defined as positive when it increases the energy of the system.", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 66, "text": "Since the free energy is the maximum amount of work that can be extracted from a system, one can write:", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 67, "text": "A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell production of an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis.", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 68, "text": "A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy.", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 69, "text": "Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example.", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 70, "text": "The relation between the equilibrium constant, K, and the Gibbs free energy for an electrochemical cell is expressed as follows:", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 71, "text": "Rearranging to express the relation between standard potential and equilibrium constant yields", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 72, "text": "At T = 298 K, the previous equation can be rewritten using the Briggsian logarithm as follows:", "title": "Spontaneity of redox reaction" }, { "paragraph_id": 73, "text": "The standard potential of an electrochemical cell requires standard conditions (ΔG°) for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. 
Towards the end of the 19th century, the German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 74, "text": "In the late 19th century, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 75, "text": "Here ΔG is the change in Gibbs free energy, ΔG° is the change in Gibbs free energy under standard conditions (that is, when Q is equal to 1), T is the absolute temperature (in kelvins), R is the gas constant and Q is the reaction quotient, which can be calculated by dividing concentrations of products by those of reactants, each raised to the power of its stoichiometric coefficient, using only those products and reactants that are aqueous or gaseous.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 76, "text": "Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 77, "text": "Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 78, "text": "Here ne is the number of electrons (in moles), F is the Faraday constant (in coulombs/mole), and ΔE is the cell potential (in volts).", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 79, "text": "Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 80, "text": "Assuming standard conditions (T = 298 K or 25 °C) and R = 8.3145 J/(K·mol), the equation above can be expressed in terms of the base-10 logarithm as shown below:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 81, "text": "Note that RT/F is also known as the thermal voltage VT and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 82, "text": "A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes on the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 83, "text": "An example is an electrochemical cell, where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both electrodes undergo the same chemistry (although the reaction proceeds in reverse at the anode)", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 84, "text": "Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu2+ ions increases. 
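A minimal sketch of the Nernst equation at 25 °C, applied to the copper concentration cell just described (E° = 0, n = 2, with the concentrations 0.05 M and 2.0 M used in place of activities, which is only an approximation):

import math

def nernst(E_standard, n, Q, T=298.15):
    # E = E° − (RT/nF) ln Q; at 25 °C the prefactor is about 0.05916/n volts per decade of Q.
    R, F = 8.3145, 96485.0
    return E_standard - (R * T / (n * F)) * math.log(Q)

# Copper concentration cell: oxidation in the dilute (0.05 M) half-cell,
# reduction in the concentrated (2.0 M) half-cell, so Q = 0.05 / 2.0.
print(round(nernst(0.0, 2, 0.05 / 2.0), 4))   # ≈ +0.0474 V

The small positive value indicates the cell runs until the two concentrations equalize, at which point Q = 1 and the potential falls to zero.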
Reduction will take place in the cell's compartment where the concentration is higher and oxidation will occur on the more dilute side.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 85, "text": "The following cell diagram describes the concentration cell mentioned above:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 86, "text": "where the half cell reactions for oxidation and reduction are:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 87, "text": "The cell's emf is calculated through the Nernst equation as follows:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 88, "text": "The value of E° in this kind of cell is zero, as electrodes and ions are the same in both half-cells.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 89, "text": "After replacing values from the case mentioned, it is possible to calculate cell's potential:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 90, "text": "or by:", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 91, "text": "However, this value is only approximate, as reaction quotient is defined in terms of ion activities which can be approximated with the concentrations as calculated here.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 92, "text": "The Nernst equation plays an important role in understanding electrical effects in cells and organelles. Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell.", "title": "Cell emf dependency on changes in concentration" }, { "paragraph_id": 93, "text": "Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc-manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery using zinc and mercuric oxide provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells.", "title": "Battery" }, { "paragraph_id": 94, "text": "The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is if left uncharged acid will crystallize within the lead plates of the battery rendering it useless. These batteries last an average of 3 years with daily use but it is not unheard of for a lead acid battery to still be functional after 7–10 years. Lead-acid cells continue to be widely used in automobiles.", "title": "Battery" }, { "paragraph_id": 95, "text": "All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low temperature performance. 
The lithium metal battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices.", "title": "Battery" }, { "paragraph_id": 96, "text": "The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen and oxygen directly into electrical energy with a much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system.", "title": "Battery" }, { "paragraph_id": 97, "text": "Corrosion is an electrochemical process, which reveals itself as rust or tarnish on metals like iron or copper and their respective alloys, steel and brass.", "title": "Corrosion" }, { "paragraph_id": 98, "text": "For iron rust to occur the metal has to be in contact with oxygen and water. The chemical reactions for this process are relatively complex and not all of them are completely understood. It is believed the causes are the following: Electron transfer (reduction-oxidation)", "title": "Corrosion" }, { "paragraph_id": 99, "text": "Iron corrosion takes place in an acid medium; H ions come from reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. Fe ions oxidize further, following this equation:", "title": "Corrosion" }, { "paragraph_id": 100, "text": "Iron(III) oxide hydrate is known as rust. The concentration of water associated with iron oxide varies, thus the chemical formula is represented by Fe2O3·xH2O.", "title": "Corrosion" }, { "paragraph_id": 101, "text": "An electric circuit is formed as passage of electrons and ions occurs; thus if an electrolyte is present it will facilitate oxidation, explaining why rusting is quicker in salt water.", "title": "Corrosion" }, { "paragraph_id": 102, "text": "Coinage metals, such as copper and silver, slowly corrode through use. A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high sulfur foods such as eggs or the low levels of sulfur species in the air develop a layer of black silver sulfide.", "title": "Corrosion" }, { "paragraph_id": 103, "text": "Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia.", "title": "Corrosion" }, { "paragraph_id": 104, "text": "Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface, which bonds with the underlying metal. This thin oxide layer protects the underlying bulk of the metal from the air preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized.", "title": "Corrosion" }, { "paragraph_id": 105, "text": "Attempts to save a metal from becoming anodic are of two general types. 
Anodic regions dissolve and destroy the structural integrity of the metal.", "title": "Corrosion" }, { "paragraph_id": 106, "text": "While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur.", "title": "Corrosion" }, { "paragraph_id": 107, "text": "Metals can be coated with paint or other less conductive metals (passivation). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion. The region under the coating adjacent to the scratch acts as the anode of the reaction.", "title": "Corrosion" }, { "paragraph_id": 108, "text": "See Anodizing", "title": "Corrosion" }, { "paragraph_id": 109, "text": "A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, thus spared corrosion. It is called \"sacrificial\" because the anode dissolves and has to be replaced periodically.", "title": "Corrosion" }, { "paragraph_id": 110, "text": "Zinc bars are attached to various locations on steel ship hulls to render the ship hull cathodic. The zinc bars are replaced periodically. Other metals, such as magnesium, would work very well but zinc is the least expensive useful metal.", "title": "Corrosion" }, { "paragraph_id": 111, "text": "To protect pipelines, an ingot of buried or exposed magnesium (or zinc) is buried beside the pipeline and is connected electrically to the pipe above ground. The pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed. At intervals new ingots are buried to replace those dissolved.", "title": "Corrosion" }, { "paragraph_id": 112, "text": "The spontaneous redox reactions of a conventional battery produce electricity through the different reduction potentials of the cathode and anode in the electrolyte. However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell.", "title": "Electrolysis" }, { "paragraph_id": 113, "text": "When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine. Industrially this process takes place in a special cell named Down's cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell.", "title": "Electrolysis" }, { "paragraph_id": 114, "text": "Reactions that take place in a Down's cell are the following:", "title": "Electrolysis" }, { "paragraph_id": 115, "text": "This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in mineral dressing and metallurgy industries.", "title": "Electrolysis" }, { "paragraph_id": 116, "text": "The emf for this process is approximately −4 V indicating a (very) non-spontaneous process. In order for this reaction to occur the power supply should provide at least a potential difference of 4 V. However, larger voltages must be used for this reaction to occur at a high rate.", "title": "Electrolysis" }, { "paragraph_id": 117, "text": "Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. 
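Returning briefly to the cathodic protection described above, the choice of sacrificial metal can be framed with the same kind of reduction-potential table used earlier. A Python sketch follows; the potentials are assumed textbook values, not figures from this article:

# Assumed textbook standard reduction potentials (V vs. SHE); not from this article.
E_red = {"Mg2+/Mg": -2.37, "Zn2+/Zn": -0.76, "Fe2+/Fe": -0.44, "Cu2+/Cu": 0.34}

def sacrificial_candidates(protected):
    # A metal with a more negative reduction potential than the protected metal is
    # oxidized preferentially, forcing the protected metal to act as the cathode.
    return [m for m, e in E_red.items() if e < E_red[protected]]

print(sacrificial_candidates("Fe2+/Fe"))   # ['Mg2+/Mg', 'Zn2+/Zn']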
Water does not decompose into hydrogen and oxygen spontaneously as the Gibbs free energy change for the process at standard conditions is very positive, about 474.4 kJ. The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell. In it, a pair of inert electrodes usually made of platinum immersed in water act as anode and cathode in the electrolytic process. The electrolysis starts with the application of an external voltage between the electrodes. This process will not occur except at extremely high voltages without an electrolyte such as sodium chloride or sulfuric acid (most used 0.1 M).", "title": "Electrolysis" }, { "paragraph_id": 118, "text": "Bubbles from the gases will be seen near both electrodes. The following half reactions describe the process mentioned above:", "title": "Electrolysis" }, { "paragraph_id": 119, "text": "Although strong acids may be used in the apparatus, the reaction will not net consume the acid. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively low voltages (~2 V depending on the pH).", "title": "Electrolysis" }, { "paragraph_id": 120, "text": "Electrolysis in an aqueous solution is a similar process as mentioned in electrolysis of water. However, it is considered to be a complex process because the contents in solution have to be analyzed in half reactions, whether reduced or oxidized.", "title": "Electrolysis" }, { "paragraph_id": 121, "text": "The presence of water in a solution of sodium chloride must be examined in respect to its reduction and oxidation in both electrodes. Usually, water is electrolysed as mentioned above in electrolysis of water yielding gaseous oxygen in the anode and gaseous hydrogen in the cathode. On the other hand, sodium chloride in water dissociates in Na and Cl ions. The cation, which is the positive ion, will be attracted to the cathode (−), thus reducing the sodium ion. The chloride anion will then be attracted to the anode (+), where it is oxidized to chlorine gas.", "title": "Electrolysis" }, { "paragraph_id": 122, "text": "The following half reactions should be considered in the process mentioned:", "title": "Electrolysis" }, { "paragraph_id": 123, "text": "Reaction 1 is discarded as it has the most negative value on standard reduction potential thus making it less thermodynamically favorable in the process.", "title": "Electrolysis" }, { "paragraph_id": 124, "text": "When comparing the reduction potentials in reactions 2 and 4, the reduction of chloride ion is favored. Thus, if the Cl ion is favored for reduction, then the water reaction is favored for oxidation producing gaseous oxygen, however experiments show gaseous chlorine is produced and not oxygen.", "title": "Electrolysis" }, { "paragraph_id": 125, "text": "Although the initial analysis is correct, there is another effect, known as the overvoltage effect. Additional voltage is sometimes required, beyond the voltage predicted by the E°cell. This may be due to kinetic rather than thermodynamic considerations. In fact, it has been proven that the activation energy for the chloride ion is very low, hence favorable in kinetic terms. 
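As a quick check of the thermodynamic threshold implied by the figure quoted above (assuming the 474.4 kJ refers to the decomposition of two moles of liquid water, which transfers four moles of electrons), dividing by nF gives the familiar reversible decomposition voltage of roughly 1.23 V; the overvoltage discussed here is the extra potential needed on top of that to obtain a practical rate:

F = 96485.0        # Faraday constant, C/mol
dG = 474.4e3       # J for 2 H2O(l) -> 2 H2(g) + O2(g), as quoted above (assumed basis)
n = 4              # moles of electrons transferred for that equation (assumed)

E_min = dG / (n * F)     # reversible (minimum) cell voltage, E = ΔG/(nF)
print(round(E_min, 2))   # ≈ 1.23 V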
In other words, although the voltage applied is thermodynamically sufficient to drive electrolysis, the rate is so slow that to make the process proceed in a reasonable time frame, the voltage of the external source has to be increased (hence, overvoltage).", "title": "Electrolysis" }, { "paragraph_id": 126, "text": "Finally, reaction 3 is favorable because it describes the production of OH− ions, which leaves the possible reduction of H+ ions as a less favorable option.", "title": "Electrolysis" }, { "paragraph_id": 127, "text": "The overall reaction for the process according to the analysis is the following:", "title": "Electrolysis" }, { "paragraph_id": 128, "text": "As the overall reaction indicates, the concentration of chloride ions is reduced in comparison to that of OH− ions (whose concentration increases). The reaction also shows the production of gaseous hydrogen, chlorine and aqueous sodium hydroxide.", "title": "Electrolysis" }, { "paragraph_id": 129, "text": "Quantitative aspects of electrolysis were originally developed by Michael Faraday in 1834. Faraday is also credited with coining the terms electrolyte and electrolysis, among many others, while he studied the quantitative analysis of electrochemical reactions. He was also an advocate of the law of conservation of energy.", "title": "Electrolysis" }, { "paragraph_id": 130, "text": "Faraday concluded, after several experiments on electric current in a non-spontaneous process, that the mass of the products yielded on the electrodes was proportional to the amount of current supplied to the cell, the length of time the current flowed, and the molar mass of the substance analyzed. In other words, the amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell.", "title": "Electrolysis" }, { "paragraph_id": 131, "text": "Below is a simplified equation of Faraday's first law:", "title": "Electrolysis" }, { "paragraph_id": 132, "text": "where", "title": "Electrolysis" }, { "paragraph_id": 133, "text": "Faraday devised the laws of chemical electrodeposition of metals from solutions in 1857. He formulated the second law of electrolysis, stating \"the amounts of bodies which are equivalent to each other in their ordinary chemical action have equal quantities of electricity naturally associated with them.\" In other words, the quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights.", "title": "Electrolysis" }, { "paragraph_id": 134, "text": "An important aspect of the second law of electrolysis is electroplating, which together with the first law of electrolysis has a significant number of applications in industry, as when used to protectively coat metals to avoid corrosion.", "title": "Electrolysis" }, { "paragraph_id": 135, "text": "There are various important electrochemical processes in both nature and industry, like the coating of objects with metals or metal oxides through electrodeposition, the addition (electroplating) or removal (electropolishing) of thin layers of metal from an object's surface, and the detection of alcohol in drunk drivers through the redox reaction of ethanol. The generation of chemical energy through photosynthesis is inherently an electrochemical process, as is production of metals like aluminum and titanium from their ores. Certain diabetes blood sugar meters measure the amount of glucose in the blood through its redox potential. 
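A minimal sketch of the simplified form of Faraday's first law referred to above, m = (Q/F)·(M/z). The example numbers (copper deposited by a 2 A current over one hour) are illustrative assumptions, not data from the article:

F = 96485.0   # Faraday constant, C/mol

def deposited_mass(current_A, time_s, molar_mass_g_mol, z):
    # Faraday's first law: mass = (I*t / F) * (M / z), where z is electrons per ion.
    charge = current_A * time_s            # total charge Q in coulombs
    return (charge / (F * z)) * molar_mass_g_mol

# Example: copper (M ≈ 63.5 g/mol, deposited from Cu2+ so z = 2), 2 A for 1 hour.
print(round(deposited_mass(2.0, 3600.0, 63.5, 2), 2))   # ≈ 2.37 g of Cu

The second law then says that the same total charge (7200 C here) would deposit different metals in amounts proportional to their equivalent weights M/z.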
In addition to established electrochemical technologies (like deep cycle lead acid batteries) there is also a wide range of new emerging technologies such as fuel cells, large format lithium-ion batteries, electrochemical reactors and super-capacitors that are becoming increasingly commercial. Electrochemical or coulometric titrations were introduced for quantitative analysis of minute quantities in 1938 by the Hungarian chemists László Szebellédy and Zoltan Somogyi. Electrochemistry also has important applications in the food industry, like the assessment of food/package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, or the determination of free acidity in olive oil.", "title": "Applications" }, { "paragraph_id": 136, "text": "", "title": "External links" } ]
Electrochemistry is the branch of physical chemistry concerned with the relationship between electrical potential difference and identifiable chemical change. These reactions involve electrons moving via an electronically-conducting phase between electrodes separated by an ionically conducting and electronically insulating electrolyte. When a chemical reaction is driven by an electrical potential difference, as in electrolysis, or if a potential difference results from a chemical reaction as in an electric battery or fuel cell, it is called an electrochemical reaction. Unlike in other chemical reactions, in electrochemical reactions electrons are not transferred directly between atoms, ions, or molecules, but via the aforementioned electronically-conducting circuit. This phenomenon is what distinguishes an electrochemical reaction from a conventional chemical reaction.
2001-10-26T03:46:20Z
2023-12-03T05:42:51Z
[ "Template:Short description", "Template:Portal", "Template:Webarchive", "Template:Curlie", "Template:Chem", "Template:Mvar", "Template:Pad", "Template:Div col", "Template:Div col end", "Template:Reflist", "Template:ISBN", "Template:Main article", "Template:Commons category-inline", "Template:Prone to spam", "Template:Analytical chemistry", "Template:Authority control", "Template:Abbr", "Template:Sfrac", "Template:Cite book", "Template:Cite web", "Template:Cite journal", "Template:BranchesofChemistry", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Electrochemistry
9,602
Edinburgh
Edinburgh (/ˈɛdɪnbərə/ Scots: [ˈɛdɪnbʌrə]; Scottish Gaelic: Dùn Èideann [ˌt̪un ˈeːtʲən̪ˠ]) is the capital city of Scotland and one of its 32 council areas. The city is located in south-east Scotland, and is bounded to the north by the Firth of Forth estuary and to the south by the Pentland Hills. Edinburgh had a population of 506,520 in mid-2020, making it the second-most populous city in Scotland and the seventh-most populous in the United Kingdom. Recognised as the capital of Scotland since at least the 15th century, Edinburgh is the seat of the Scottish Government, the Scottish Parliament, the highest courts in Scotland, and the Palace of Holyroodhouse, the official residence of the British monarch in Scotland. It is also the annual venue of the General Assembly of the Church of Scotland. The city has long been a centre of education, particularly in the fields of medicine, Scottish law, literature, philosophy, the sciences and engineering. The University of Edinburgh, founded in 1582 and now one of three in the city, is considered one of the best research institutions in the world. It is the second-largest financial centre in the United Kingdom, the fourth largest in Europe, and the thirteenth largest internationally. The city is a cultural centre, and is the home of institutions including the National Museum of Scotland, the National Library of Scotland and the Scottish National Gallery. The city is also known for the Edinburgh International Festival and the Fringe, the latter being the world's largest annual international arts festival. Historic sites in Edinburgh include Edinburgh Castle, the Palace of Holyroodhouse, the churches of St. Giles, Greyfriars and the Canongate, and the extensive Georgian New Town built in the 18th/19th centuries. Edinburgh's Old Town and New Town together are listed as a UNESCO World Heritage Site, which has been managed by Edinburgh World Heritage since 1999. The city's historical and cultural attractions have made it the UK's second-most visited tourist destination, attracting 4.9 million visits, including 2.4 million from overseas in 2018. Edinburgh is governed by the City of Edinburgh Council, a unitary authority. The City of Edinburgh council area had an estimated population of 526,470 in mid-2021, and includes outlying towns and villages which are not part of Edinburgh proper. The city is in the Lothian region and was historically part of the shire of Midlothian (also called Edinburghshire). "Edin", the root of the city's name, derives from Eidyn, the name for the region in Cumbric, the Brittonic Celtic language formerly spoken there. The name's meaning is unknown. The district of Eidyn was centred on the stronghold of Din Eidyn, the dun or hillfort of Eidyn. This stronghold is believed to have been located at Castle Rock, now the site of Edinburgh Castle. A siege of Din Eidyn by Oswald, king of the Angles of Northumbria in 638 marked the beginning of three centuries of Germanic influence in south east Scotland that laid the foundations for the development of Scots, before the town was ultimately subsumed in 954 by the kingdom known to the English as Scotland. As the language shifted from Cumbric to Northumbrian Old English and then Scots, the Brittonic din in Din Eidyn was replaced by burh, producing Edinburgh. In Scottish Gaelic din becomes dùn, producing modern Dùn Èideann. The city is affectionately nicknamed Auld Reekie, Scots for Old Smoky, for the views from the country of the smoke-covered Old Town. 
In Walter Scott's 1820 novel The Abbot, a character observes that "yonder stands Auld Reekie—you may see the smoke hover over her at twenty miles' distance". In 1898, Thomas Carlyle comments on the phenomenon: "Smoke cloud hangs over old Edinburgh, for, ever since Aeneas Silvius's time and earlier, the people have the art, very strange to Aeneas, of burning a certain sort of black stones, and Edinburgh with its chimneys is called 'Auld Reekie' by the country people". 19th-century historian Robert Chambers argued that the sobriquet could not be traced before the reign of Charles II in the late 17th century. Instead, he attributed the name to a Fife laird, Durham of Largo, who regulated the bedtime of his children by the smoke rising above Edinburgh from the fires of the tenements. "It's time now bairns, to tak' the beuks, and gang to our beds, for yonder's Auld Reekie, I see, putting on her nicht -cap!". Edinburgh has been popularly called the Athens of the North since the early 19th century. References to Athens, such as Athens of Britain and Modern Athens, had been made as early as the 1760s. The similarities were seen to be topographical but also intellectual. Edinburgh's Castle Rock reminded returning grand tourists of the Athenian Acropolis, as did aspects of the neoclassical architecture and layout of New Town. Both cities had flatter, fertile agricultural land sloping down to a port several miles away (respectively, Leith and Piraeus). Intellectually, the Scottish Enlightenment, with its humanist and rationalist outlook, was influenced by Ancient Greek philosophy. In 1822, artist Hugh William Williams organized an exhibition that showed his paintings of Athens alongside views of Edinburgh, and the idea of a direct parallel between both cities quickly caught the popular imagination. When plans were drawn up in the early 19th century to architecturally develop Calton Hill, the design of the National Monument directly copied Athens' Parthenon. Tom Stoppard's character Archie of Jumpers said, perhaps playing on Reykjavík meaning "smoky bay", that the "Reykjavík of the South" would be more appropriate. The city has also been known by several Latin names, such as Edinburgum, while the adjectival forms Edinburgensis and Edinensis are used in educational and scientific contexts. Edina is a late 18th-century poetical form used by the Scots poets Robert Fergusson and Robert Burns. "Embra" or "Embro" are colloquialisms from the same time, as in Robert Garioch's Embro to the Ploy. Ben Jonson described it as "Britaine's other eye", and Sir Walter Scott referred to it as "yon Empress of the North". Robert Louis Stevenson, also a son of the city, wrote that Edinburgh "is what Paris ought to be". The earliest known human habitation in the Edinburgh area was at Cramond, where evidence was found of a Mesolithic camp site dated to c. 8500 BC. Traces of later Bronze Age and Iron Age settlements have been found on Castle Rock, Arthur's Seat, Craiglockhart Hill and the Pentland Hills. When the Romans arrived in Lothian at the end of the 1st century AD, they found a Brittonic Celtic tribe whose name they recorded as the Votadini. The Votadini transitioned into the Gododdin kingdom in the Early Middle Ages, with Eidyn serving as one of the kingdom's districts. During this period, the Castle Rock site, thought to have been the stronghold of Din Eidyn, emerged as the kingdom's major centre. 
The medieval poem Y Gododdin describes a war band from across the Brittonic world who gathered in Eidyn before a fateful raid; this may describe a historical event around AD 600. In 638, the Gododdin stronghold was besieged by forces loyal to King Oswald of Northumbria, and around this time control of Lothian passed to the Angles. Their influence continued for the next three centuries until around 950, when, during the reign of Indulf, son of Constantine II, the "burh" (fortress), named in the 10th-century Pictish Chronicle as oppidum Eden, was abandoned to the Scots. It thenceforth remained, for the most part, under their jurisdiction. The royal burgh was founded by King David I in the early 12th century on land belonging to the Crown, though the date of its charter is unknown. The first documentary evidence of the medieval burgh is a royal charter, c. 1124–1127, by King David I granting a toft in burgo meo de Edenesburg to the Priory of Dunfermline. The shire of Edinburgh seems to have also been created in the reign of David I, possibly covering all of Lothian at first, but by 1305 the eastern and western parts of Lothian had become Haddingtonshire and Linlithgowshire, leaving Edinburgh as the county town of a shire covering the central part of Lothian, which was called Edinburghshire or Midlothian (the latter name being an informal, but commonly used, alternative until the county's name was legally changed in 1947). Edinburgh was largely under English control from 1291 to 1314 and from 1333 to 1341, during the Wars of Scottish Independence. When the English invaded Scotland in 1298, Edward I of England chose not to enter Edinburgh but passed by it with his army. In the middle of the 14th century, the French chronicler Jean Froissart described it as the capital of Scotland (c. 1365), and James III (1451–88) referred to it in the 15th century as "the principal burgh of our kingdom". In 1482 James III "granted and perpetually confirmed to the said Provost, Bailies, Clerk, Council, and Community, and their successors, the office of Sheriff within the Burgh for ever, to be exercised by the Provost for the time as Sheriff, and by the Bailies for the time as Sheriffsdepute conjunctly and severally; with full power to hold Courts, to punish transgressors not only by banishment but by death, to appoint officers of Court, and to do everything else appertaining to the office of Sheriff; as also to apply to their own proper use the fines and escheats arising out of the exercise of the said office." Despite being burnt by the English in 1544, Edinburgh continued to develop and grow, and was at the centre of events in the 16th-century Scottish Reformation and 17th-century Wars of the Covenant. In 1582, Edinburgh's town council was given a royal charter by King James VI permitting the establishment of a university; founded as Tounis College (Town's College), the institution developed into the University of Edinburgh, which contributed to Edinburgh's central intellectual role in subsequent centuries. In 1603, King James VI of Scotland succeeded to the English throne, uniting the crowns of Scotland and England in a personal union known as the Union of the Crowns, though Scotland remained, in all other respects, a separate kingdom. In 1638, King Charles I's attempt to introduce Anglican church forms in Scotland encountered stiff Presbyterian opposition culminating in the conflicts of the Wars of the Three Kingdoms. 
Subsequent Scottish support for Charles Stuart's restoration to the throne of England resulted in Edinburgh's occupation by Oliver Cromwell's Commonwealth of England forces – the New Model Army – in 1650. In the 17th century, Edinburgh's boundaries were still defined by the city's defensive town walls. As a result, the city's growing population was accommodated by increasing the height of the houses. Buildings of 11 storeys or more were common, and have been described as forerunners of the modern-day skyscraper. Most of these old structures were replaced by the predominantly Victorian buildings seen in today's Old Town. In 1611 an act of parliament created the High Constables of Edinburgh to keep order in the city, thought to be the oldest statutory police force in the world. Following the Treaty of Union in 1706, the Parliaments of England and Scotland passed Acts of Union in 1706 and 1707 respectively, uniting the two kingdoms in the Kingdom of Great Britain effective from 1 May 1707. As a consequence, the Parliament of Scotland merged with the Parliament of England to form the Parliament of Great Britain, which sat at Westminster in London. The Union was opposed by many Scots, resulting in riots in the city. By the first half of the 18th century, Edinburgh was described as one of Europe's most densely populated, overcrowded and unsanitary towns. Visitors were struck by the fact that the social classes shared the same urban space, even inhabiting the same tenement buildings; although here a form of social segregation did prevail, whereby shopkeepers and tradesmen tended to occupy the cheaper-to-rent cellars and garrets, while the more well-to-do professional classes occupied the more expensive middle storeys. During the Jacobite rising of 1745, Edinburgh was briefly occupied by the Jacobite "Highland Army" before its march into England. After its eventual defeat at Culloden, there followed a period of reprisals and pacification, largely directed at the rebellious clans. In Edinburgh, the Town Council, keen to emulate London by initiating city improvements and expansion to the north of the castle, reaffirmed its belief in the Union and loyalty to the Hanoverian monarch George III by its choice of names for the streets of the New Town: for example, Rose Street and Thistle Street; and for the royal family, George Street, Queen Street, Hanover Street, Frederick Street and Princes Street (in honour of George's two sons). The consistently geometric layout of the plan for the extension of Edinburgh was the result of a major competition in urban planning staged by the Town Council in 1766. In the second half of the century, the city was at the heart of the Scottish Enlightenment, when thinkers like David Hume, Adam Smith, James Hutton and Joseph Black were familiar figures in its streets. Edinburgh became a major intellectual centre, earning it the nickname "Athens of the North" because of its many neo-classical buildings and reputation for learning, recalling ancient Athens. In the 18th-century novel The Expedition of Humphry Clinker by Tobias Smollett one character describes Edinburgh as a "hotbed of genius". Edinburgh was also a major centre for the Scottish book trade. The highly successful London bookseller Andrew Millar was apprenticed there to James McEuen. From the 1770s onwards, the professional and business classes gradually deserted the Old Town in favour of the more elegant "one-family" residences of the New Town, a migration that changed the city's social character. 
According to the foremost historian of this development, "Unity of social feeling was one of the most valuable heritages of old Edinburgh, and its disappearance was widely and properly lamented." Despite an enduring myth to the contrary, Edinburgh became an industrial centre with its traditional industries of printing, brewing and distilling continuing to grow in the 19th century and joined by new industries such as rubber works, engineering works and others. By 1821, Edinburgh had been overtaken by Glasgow as Scotland's largest city. The city centre between Princes Street and George Street became a major commercial and shopping district, a development partly stimulated by the arrival of railways in the 1840s. The Old Town became an increasingly dilapidated, overcrowded slum with high mortality rates. Improvements carried out under Lord Provost William Chambers in the 1860s began the transformation of the area into the predominantly Victorian Old Town seen today. More improvements followed in the early 20th century as a result of the work of Patrick Geddes, but relative economic stagnation during the two world wars and beyond saw the Old Town deteriorate further before major slum clearance in the 1960s and 1970s began to reverse the process. University building developments which transformed the George Square and Potterrow areas proved highly controversial. Since the 1990s a new "financial district", including the Edinburgh International Conference Centre, has grown mainly on demolished railway property to the west of the castle, stretching into Fountainbridge, a run-down 19th-century industrial suburb which has undergone radical change since the 1980s with the demise of industrial and brewery premises. This ongoing development has enabled Edinburgh to maintain its place as the United Kingdom's second largest financial and administrative centre after London. Financial services now account for a third of all commercial office space in the city. The development of Edinburgh Park, a new business and technology park covering 38 acres (15 ha), 4 mi (6 km) west of the city centre, has also contributed to the District Council's strategy for the city's major economic regeneration. In 1998, the Scotland Act, which came into force the following year, established a devolved Scottish Parliament and Scottish Executive (renamed the Scottish Government since September 2007). Both based in Edinburgh, they are responsible for governing Scotland while reserved matters such as defence, foreign affairs and some elements of income tax remain the responsibility of the Parliament of the United Kingdom in London. In 2022, Edinburgh was affected by the 2022 Scotland bin strikes. In 2023, Edinburgh became the first capital city in Europe to sign the global Plant Based Treaty, which was introduced at COP26 in 2021 in Glasgow. Green Party councillor Steve Burgess introduced the treaty. The Scottish Countryside Alliance and other farming groups called the treaty "anti-farming." Situated in Scotland's Central Belt, Edinburgh lies on the southern shore of the Firth of Forth. The city centre is 2+1⁄2 mi (4.0 km) southwest of the shoreline of Leith and 26 mi (42 km) inland, as the crow flies, from the east coast of Scotland and the North Sea at Dunbar. 
While the early burgh grew up near the prominent Castle Rock, the modern city is often said to be built on seven hills, namely Calton Hill, Corstorphine Hill, Craiglockhart Hill, Braid Hill, Blackford Hill, Arthur's Seat and the Castle Rock, giving rise to allusions to the seven hills of Rome. Occupying a narrow gap between the Firth of Forth to the north and the Pentland Hills and their outrunners to the south, the city sprawls over a landscape which is the product of early volcanic activity and later periods of intensive glaciation. Igneous activity between 350 and 400 million years ago, coupled with faulting, led to the creation of tough basalt volcanic plugs, which predominate over much of the area. One such example is the Castle Rock which forced the advancing ice sheet to divide, sheltering the softer rock and forming a 1 mi-long (1.6 km) tail of material to the east, thus creating a distinctive crag and tail formation. Glacial erosion on the north side of the crag gouged a deep valley later filled by the now drained Nor Loch. These features, along with another hollow on the rock's south side, formed an ideal natural strongpoint upon which Edinburgh Castle was built. Similarly, Arthur's Seat is the remains of a volcano dating from the Carboniferous period, which was eroded by a glacier moving west to east during the ice age. Erosive action such as plucking and abrasion exposed the rocky crags to the west before leaving a tail of deposited glacial material swept to the east. This process formed the distinctive Salisbury Crags, a series of teschenite cliffs between Arthur's Seat and the location of the early burgh. The residential areas of Marchmont and Bruntsfield are built along a series of drumlin ridges south of the city centre, which were deposited as the glacier receded. Other prominent landforms such as Calton Hill and Corstorphine Hill are also products of glacial erosion. The Braid Hills and Blackford Hill are a series of small summits to the south of the city centre that command expansive views looking northwards over the urban area to the Firth of Forth. Edinburgh is drained by the river named the Water of Leith, which rises at the Colzium Springs in the Pentland Hills and runs for 18 miles (29 km) through the south and west of the city, emptying into the Firth of Forth at Leith. The nearest the river gets to the city centre is at Dean Village on the north-western edge of the New Town, where a deep gorge is spanned by Thomas Telford's Dean Bridge, built in 1832 for the road to Queensferry. The Water of Leith Walkway is a mixed-use trail that follows the course of the river for 19.6 km (12.2 mi) from Balerno to Leith. Excepting the shoreline of the Firth of Forth, Edinburgh is encircled by a green belt, designated in 1957, which stretches from Dalmeny in the west to Prestongrange in the east. With an average width of 3.2 km (2 mi) the principal objectives of the green belt were to contain the outward expansion of the city and to prevent the agglomeration of urban areas. Expansion affecting the green belt is strictly controlled but developments such as Edinburgh Airport and the Royal Highland Showground at Ingliston lie within the zone. Similarly, suburbs such as Juniper Green and Balerno are situated on green belt land. One feature of the Edinburgh green belt is the inclusion of parcels of land within the city which are designated green belt, even though they do not connect with the peripheral ring. 
Examples of these independent wedges of green belt include Holyrood Park and Corstorphine Hill. Edinburgh includes former towns and villages that retain much of their original character as settlements in existence before they were absorbed into the expanding city of the nineteenth and twentieth centuries. Many areas, such as Dalry, contain residences that are multi-occupancy buildings known as tenements, although the more southern and western parts of the city have traditionally been less built-up with a greater number of detached and semi-detached villas. The historic centre of Edinburgh is divided in two by the broad green swathe of Princes Street Gardens. To the south, the view is dominated by Edinburgh Castle, built high on Castle Rock, and the long sweep of the Old Town descending towards Holyrood Palace. To the north lie Princes Street and the New Town. The West End includes the financial district, with insurance and banking offices as well as the Edinburgh International Conference Centre. Edinburgh's Old and New Towns were listed as a UNESCO World Heritage Site in 1995 in recognition of the unique character of the Old Town with its medieval street layout and the planned Georgian New Town, including the adjoining Dean Village and Calton Hill areas. There are over 4,500 listed buildings within the city, a higher proportion relative to area than any other city in the United Kingdom. The castle is perched on top of a rocky crag (the remnant of an extinct volcano) and the Royal Mile runs down the crest of a ridge from it terminating at Holyrood Palace. Minor streets (called closes or wynds) lie on either side of the main spine forming a herringbone pattern. Due to space restrictions imposed by the narrowness of this landform, the Old Town became home to some of the earliest "high rise" residential buildings. Multi-storey dwellings known as lands were the norm from the 16th century onwards with ten and eleven storeys being typical and one even reaching fourteen or fifteen storeys. Numerous vaults below street level were inhabited to accommodate the influx of incomers, particularly Irish immigrants, during the Industrial Revolution. The street has several fine public buildings such as St Giles' Cathedral, the City Chambers and the Law Courts. Other places of historical interest nearby are Greyfriars Kirkyard and Mary King's Close. The Grassmarket, running deep below the castle is connected by the steep double terraced Victoria Street. The street layout is typical of the old quarters of many Northern European cities. The New Town was an 18th-century solution to the problem of an increasingly crowded city which had been confined to the ridge sloping down from the castle. In 1766 a competition to design a "New Town" was won by James Craig, a 27-year-old architect. The plan was a rigid, ordered grid, which fitted in well with Enlightenment ideas of rationality. The principal street was to be George Street, running along the natural ridge to the north of what became known as the "Old Town". To either side of it are two other main streets: Princes Street and Queen Street. Princes Street has become Edinburgh's main shopping street and now has few of its Georgian buildings in their original state. The three main streets are connected by a series of streets running perpendicular to them. The east and west ends of George Street are terminated by St Andrew Square and Charlotte Square respectively. 
The latter, designed by Robert Adam, influenced the architectural style of the New Town into the early 19th century. Bute House, the official residence of the First Minister of Scotland, is on the north side of Charlotte Square. The hollow between the Old and New Towns was formerly the Nor Loch, which was created for the town's defence but came to be used by the inhabitants for dumping their sewage. It was drained by the 1820s as part of the city's northward expansion. Craig's original plan included an ornamental canal on the site of the loch, but this idea was abandoned. Soil excavated while laying the foundations of buildings in the New Town was dumped on the site of the loch to create the slope connecting the Old and New Towns known as The Mound. In the middle of the 19th century the National Gallery of Scotland and Royal Scottish Academy Building were built on The Mound, and tunnels for the railway line between Haymarket and Waverley stations were driven through it. The Southside is a residential part of the city, which includes the districts of St Leonards, Marchmont, Morningside, Newington, Sciennes, the Grange and Blackford. The Southside is broadly analogous to the area covered formerly by the Burgh Muir, and was developed as a residential area after the opening of the South Bridge in the 1780s. The Southside is particularly popular with families (many state and private schools are here), young professionals and students (the central University of Edinburgh campus is based around George Square just north of Marchmont and the Meadows), and Napier University (with major campuses around Merchiston and Morningside). The area is also well provided with hotel and "bed and breakfast" accommodation for visiting festival-goers. These districts often feature in works of fiction. For example, Church Hill in Morningside, was the home of Muriel Spark's Miss Jean Brodie, and Ian Rankin's Inspector Rebus lives in Marchmont and works in St Leonards. Leith was historically the port of Edinburgh, an arrangement of unknown date that was confirmed by the royal charter Robert the Bruce granted to the city in 1329. The port developed a separate identity from Edinburgh, which to some extent it still retains, and it was a matter of great resentment when the two burghs merged in 1920 into the City of Edinburgh. Even today the parliamentary seat is known as "Edinburgh North and Leith". The loss of traditional industries and commerce (the last shipyard closed in 1983) resulted in economic decline. The Edinburgh Waterfront development has transformed old dockland areas from Leith to Granton into residential areas with shopping and leisure facilities and helped rejuvenate the area. With the redevelopment, Edinburgh has gained the business of cruise liner companies which now provide cruises to Norway, Sweden, Denmark, Germany, and the Netherlands. The coastal suburb of Portobello is characterised by Georgian villas, Victorian tenements, a beach and promenade and cafés, bars, restaurants and independent shops. There are rowing and sailing clubs and a restored Victorian swimming pool, including Turkish baths. The urban area of Edinburgh is almost entirely within the City of Edinburgh Council boundary, merging with Musselburgh in East Lothian. Towns within easy reach of the city boundary include Inverkeithing, Haddington, Tranent, Prestonpans, Dalkeith, Bonnyrigg, Loanhead, Penicuik, Broxburn, Livingston and Dunfermline. 
Edinburgh lies at the heart of the Edinburgh & South East Scotland City region with a population in 2014 of 1,339,380. Like most of Scotland, Edinburgh has a cool, temperate, maritime climate which, despite its northerly latitude, is milder than places which lie at similar latitudes such as Moscow and Labrador. The city's proximity to the sea mitigates any large variations in temperature or extremes of climate. Winter daytime temperatures rarely fall below freezing while summer temperatures are moderate, rarely exceeding 22 °C (72 °F). The highest temperature recorded in the city was 31.6 °C (88.9 °F) on 25 July 2019 at Gogarbank, beating the previous record of 31 °C (88 °F) on 4 August 1975 at Edinburgh Airport. The lowest temperature recorded in recent years was −14.6 °C (5.7 °F) during December 2010 at Gogarbank. Given Edinburgh's position between the coast and hills, it is renowned as "the windy city", with the prevailing wind direction coming from the south-west, which is often associated with warm, unstable air from the North Atlantic Current that can give rise to rainfall – although considerably less than cities to the west, such as Glasgow. Rainfall is distributed fairly evenly throughout the year. Winds from an easterly direction are usually drier but considerably colder, and may be accompanied by haar, a persistent coastal fog. Vigorous Atlantic depressions, known as European windstorms, can affect the city between October and May. Located slightly north of the city centre, the weather station at the Royal Botanic Garden Edinburgh (RBGE) has been an official weather station for the Met Office since 1956. The Met Office operates its own weather station at Gogarbank on the city's western outskirts, near Edinburgh Airport. This slightly inland station has a slightly wider temperature span between seasons, is cloudier and somewhat wetter, but differences are minor. Temperature and rainfall records have been kept at the Royal Observatory since 1764. The most recent official population estimates (2020) are 506,520 for the locality (includes Currie), 530,990 for the Edinburgh settlement (includes Musselburgh). Edinburgh has a high proportion of young adults, with 19.5% of the population in their 20s (exceeded only by Aberdeen) and 15.2% in their 30s which is the highest in Scotland. The proportion of Edinburgh's population born in the UK fell from 92% to 84% between 2001 and 2011, while the proportion of White Scottish-born fell from 78% to 70%. Of those Edinburgh residents born in the UK, 335,000 or 83% were born in Scotland, with 58,000 or 14% being born in England. Some 13,000 people or 2.7% of the city's population are of Polish descent. 39,500 people or 8.2% of Edinburgh's population class themselves as Non-White which is an increase from 4% in 2001. Of the Non-White population, the largest group by far are Asian, totalling 26,264 people. Within the Asian population, people of Chinese descent are now the largest sub-group, with 8,076 people, amounting to about 1.7% of the city's total population. The city's population of Indian descent amounts to 6,470 (1.4% of the total population), while there are some 5,858 of Pakistani descent (1.2% of the total population). Although they account for only 1,277 people or 0.3% of the city's population, Edinburgh has the highest number and proportion of people of Bangladeshi descent in Scotland. Over 7,000 people were born in African countries (1.6% of the total population) and nearly 7,000 in the Americas. 
With the notable exception of Inner London, Edinburgh has a higher number of people born in the United States (over 3,700) than any other city in the UK. The proportion of people born outside the UK was 15.9% compared with 8% in 2001. A census by the Edinburgh presbytery in 1592 recorded a population of 8,003 adults spread equally north and south of the High Street which runs along the spine of the ridge sloping down from the Castle. In the 18th and 19th centuries, the population expanded rapidly, rising from 49,000 in 1751 to 136,000 in 1831, primarily due to migration from rural areas. As the population grew, problems of overcrowding in the Old Town, particularly in the cramped tenements that lined the present day Royal Mile and the Cowgate, were exacerbated. Poor sanitary arrangements resulted in a high incidence of disease, with outbreaks of cholera occurring in 1832, 1848 and 1866. The construction of the New Town from 1767 onwards witnessed the migration of the professional and business classes from the difficult living conditions in the Old Town to the lower density, higher quality surroundings taking shape on land to the north. Expansion southwards from the Old Town saw more tenements being built in the 19th century, giving rise to Victorian suburbs such as Dalry, Newington, Marchmont and Bruntsfield. Early 20th-century population growth coincided with lower-density suburban development. As the city expanded to the south and west, detached and semi-detached villas with large gardens replaced tenements as the predominant building style. Nonetheless, the 2001 census revealed that over 55% of Edinburgh's population were still living in tenements or blocks of flats, a figure in line with other Scottish cities, but much higher than other British cities, and even central London. From the early to mid 20th century, the growth in population, together with slum clearance in the Old Town and other areas, such as Dumbiedykes, Leith, and Fountainbridge, led to the creation of new estates such as Stenhouse and Saughton, Craigmillar and Niddrie, Pilton and Muirhouse, Piershill, and Sighthill. In 2018, the Church of Scotland had 20,956 members in 71 congregations in the Presbytery of Edinburgh. Its most prominent church is St Giles' on the Royal Mile, first dedicated in 1243 but believed to date from before the 12th century. Saint Giles is historically the patron saint of Edinburgh. St Cuthbert's, situated at the west end of Princes Street Gardens in the shadow of Edinburgh Castle and St Giles' can lay claim to being the oldest Christian sites in the city, though the present St Cuthbert's, designed by Hippolyte Blanc, was dedicated in 1894. Other Church of Scotland churches include Greyfriars Kirk, the Canongate Kirk, St Andrew's and St George's West Church and the Barclay Church. The Church of Scotland Offices are in Edinburgh, as is the Assembly Hall where the annual General Assembly is held. The Roman Catholic Archdiocese of St Andrews and Edinburgh has 27 parishes across the city. The Archbishop of St Andrews and Edinburgh has his official residence in Greenhill, the diocesan offices are in nearby Marchmont, and its cathedral is St Mary's Cathedral, Edinburgh. The Diocese of Edinburgh of the Scottish Episcopal Church has over 50 churches, half of them in the city. Its centre is the late 19th-century Gothic style St Mary's Cathedral in the West End's Palmerston Place. Orthodox Christianity is represented by Pan, Romanian and Russian Orthodox churches. 
There are several independent churches in the city, both Catholic and Protestant, including Charlotte Chapel, Carrubbers Christian Centre, Bellevue Chapel and Sacred Heart. There are also churches belonging to Quakers, Christadelphians, Seventh-day Adventists, Church of Christ, Scientist, The Church of Jesus Christ of Latter-day Saints (LDS Church) and Elim Pentecostal Church. Muslims have several places of worship across the city. Edinburgh Central Mosque, the largest Islamic place of worship, is located in Potterrow on the city's Southside, near Bristo Square. Construction was largely financed by a gift from King Fahd of Saudi Arabia and was completed in 1998. There is also an Ahmadiyya Muslim community. The first recorded presence of a Jewish community in Edinburgh dates back to the late 18th century. Edinburgh's Orthodox synagogue, opened in 1932, is in Salisbury Road and can accommodate a congregation of 2000. A Liberal Jewish congregation also meets in the city. A Sikh gurdwara and a Hindu mandir are located in Leith. The city also has a Brahma Kumaris centre in the Polwarth area. The Edinburgh Buddhist Centre, run by the Triratna Buddhist Community, formerly situated in Melville Terrace, now runs sessions at the Healthy Life Centre, Bread Street. Other Buddhist traditions are represented by groups which meet in the capital: the Community of Interbeing (followers of Thich Nhat Hanh), Rigpa, Samye Dzong, Theravadin, Pure Land and Shambala. There is a Sōtō Zen Priory in Portobello and a Theravadin Thai Buddhist Monastery in Slateford Road. Edinburgh is home to a Baháʼí community, and a Theosophical Society meets in Great King Street. Edinburgh has an Inter-Faith Association. Edinburgh has over 39 graveyards and cemeteries, many of which are listed and of historical character, including several former church burial grounds. Examples include Old Calton Burial Ground, Greyfriars Kirkyard and Dean Cemetery. Edinburgh has the strongest economy of any city in the United Kingdom outside London and the highest percentage of professionals in the UK with 43% of the population holding a degree-level or professional qualification. According to the Centre for International Competitiveness, it is the most competitive large city in the United Kingdom. It also has the highest gross value added per employee of any city in the UK outside London, measuring £57,594 in 2010. It was named European Best Large City of the Future for Foreign Direct Investment and Best Large City for Foreign Direct Investment Strategy in the Financial Times fDi magazine awards 2012/13. In the 19th century, Edinburgh's economy was known for banking and insurance, publishing and printing, and brewing and distilling. Today, its economy is based mainly on financial services, scientific research, higher education, and tourism. In March 2010, unemployment in Edinburgh was comparatively low at 3.6%, and it remains consistently below the Scottish average of 4.5%. Edinburgh is the second most visited city by foreign visitors in the UK after London. Banking has been a mainstay of the Edinburgh economy for over 300 years, since the Bank of Scotland was established by an act of the Scottish Parliament in 1695. Today, the financial services industry, with its particularly strong insurance and investment sectors, and underpinned by Edinburgh-based firms such as Scottish Widows and Standard Life Aberdeen, accounts for the city being the UK's second financial centre after London and Europe's fourth in terms of equity assets. 
The NatWest Group (formerly Royal Bank of Scotland Group) opened new global headquarters at Gogarburn in the west of the city in October 2005. The city is home to the headquarters of Bank of Scotland, Sainsbury's Bank, Tesco Bank, and TSB Bank. Tourism is also an important element in the city's economy. Tourists visit historical sites such as Edinburgh Castle, the Palace of Holyroodhouse and the Old and New Towns, which are designated a World Heritage Site. Their numbers are augmented in August each year during the Edinburgh Festivals, which attract 4.4 million visitors and generate over £100M for the local economy. As the centre of Scotland's government and legal system, the public sector plays a central role in Edinburgh's economy. Many departments of the Scottish Government are in the city. Other major employers include NHS Scotland and local government administration. When the £1.3bn Edinburgh & South East Scotland City Region Deal was signed in 2018, the region's Gross Value Added (GVA) contribution to the Scottish economy was cited as £33bn, or 33% of the country's output. The City Region Deal funds a range of "Data Driven Innovation" hubs which use data to innovate in the region, recognising the region's strengths in technology and data science, the growing importance of the data economy, and the need to tackle the digital skills gap as a route to social and economic prosperity. The city hosts a series of festivals that run between the end of July and early September each year. The best known of these events are the Edinburgh Festival Fringe, the Edinburgh International Festival, the Edinburgh Military Tattoo, the Edinburgh Art Festival and the Edinburgh International Book Festival. The longest established of these festivals is the Edinburgh International Festival, which was first held in 1947 and consists mainly of a programme of high-profile theatre productions and classical music performances, featuring international directors, conductors, theatre companies and orchestras. This has since been overtaken in size by the Edinburgh Fringe, which began as a programme of marginal acts alongside the "official" Festival and has become the world's largest performing arts festival. In 2017, nearly 3,400 different shows were staged in 300 venues across the city. Comedy has become one of the mainstays of the Fringe, with numerous well-known comedians getting their first 'break' there, often by being chosen to receive the Edinburgh Comedy Award. The Edinburgh Military Tattoo occupies the Castle Esplanade every night for three weeks each August, with massed pipe bands and military bands drawn from around the world. Performances end with a short fireworks display. As well as the summer festivals, many other festivals are held during the rest of the year, including the Edinburgh International Film Festival and Edinburgh International Science Festival. The summer of 2020 was the first time in its 70-year history that the Edinburgh festival was not run, being cancelled due to the COVID-19 pandemic. This affected many of the tourist-focused businesses in Edinburgh which depend on the various festivals over summer to return an annual profit. The annual Edinburgh Hogmanay celebration was originally an informal street party focused on the Tron Kirk in the Old Town's High Street. Since 1993, it has been officially organised with the focus moved to Princes Street. In 1996, over 300,000 people attended, leading to ticketing of the main street party in later years up to a limit of 100,000 tickets. 
Hogmanay now covers four days of processions, concerts and fireworks, with the street party beginning on Hogmanay. Alternative tickets are available for entrance into the Princes Street Gardens concert and Cèilidh, where well-known artists perform and ticket holders can participate in traditional Scottish cèilidh dancing. The event attracts thousands of people from all over the world. On the night of 30 April the Beltane Fire Festival takes place on Calton Hill, involving a procession followed by scenes inspired by old pagan spring fertility celebrations. At the beginning of October each year the Dussehra Hindu Festival is also held on Calton Hill. Outside the Festival season, Edinburgh supports several theatres and production companies. The Royal Lyceum Theatre has its own company, while the King's Theatre, Edinburgh Festival Theatre and Edinburgh Playhouse stage large touring shows. The Traverse Theatre presents a more contemporary repertoire. Productions by amateur theatre companies are staged at the Bedlam Theatre, Church Hill Theatre and King's Theatre, among others. The Usher Hall is Edinburgh's premier venue for classical music, as well as occasional popular music concerts. It was the venue for the Eurovision Song Contest 1972. Other halls staging music and theatre include The Hub, the Assembly Rooms and the Queen's Hall. The Scottish Chamber Orchestra is based in Edinburgh. Edinburgh has one repertory cinema, The Cameo (and formerly the Edinburgh Filmhouse), as well as the independent Dominion Cinema and a range of multiplexes. Edinburgh has a healthy popular music scene. Occasionally large concerts are staged at Murrayfield and Meadowbank, while mid-sized events take place at smaller venues such as 'The Corn Exchange', 'The Liquid Rooms' and 'The Bongo Club'. In 2010, PRS for Music listed Edinburgh among the UK's top ten 'most musical' cities. Several city pubs are well known for their live performances of folk music. They include 'Sandy Bell's' in Forrest Road, 'Captain's Bar' in South College Street and 'Whistlebinkies' in South Bridge. As in many other cities in the UK, numerous nightclub venues host electronic dance music events. Edinburgh is home to a flourishing group of contemporary composers such as Nigel Osborne, Peter Nelson, Lyell Cresswell, Hafliði Hallgrímsson, Edward Harper, Robert Crawford, Robert Dow and John McLeod. McLeod's music is heard regularly on BBC Radio 3 and throughout the UK. The main local newspaper is the Edinburgh Evening News. It is owned and published alongside its sister titles The Scotsman and Scotland on Sunday by JPIMedia. The city has two commercial radio stations: Forth 1, a station which broadcasts mainstream chart music, and Forth 2 on medium wave, which plays classic hits. Capital Scotland, Heart Scotland and Eklipse Sports Radio also have transmitters covering Edinburgh. Along with the UK national radio stations, BBC Radio Scotland and the Gaelic language service BBC Radio nan Gàidheal are also broadcast. DAB digital radio is broadcast over two local multiplexes. BFBS Radio broadcasts from studios on the base at Dreghorn Barracks across the city on 98.5 FM as part of its UK Bases network. Small-scale DAB started in October 2022 with numerous community stations on board. Television, along with most radio services, is broadcast to the city from the Craigkelly transmitting station situated in Fife on the opposite side of the Firth of Forth and the Black Hill transmitting station in North Lanarkshire to the west. 
There are no television stations based in the city. Edinburgh Television existed in the late 1990s to early 2003 and STV Edinburgh existed from 2015 to 2018. Edinburgh has many museums and libraries. These include the National Museum of Scotland, the National Library of Scotland, National War Museum, the Museum of Edinburgh, Surgeons' Hall Museum, the Writers' Museum, the Museum of Childhood and Dynamic Earth. The Museum on The Mound has exhibits on money and banking. Edinburgh Zoo, covering 82 acres (33 ha) on Corstorphine Hill, is the second most visited paid tourist attraction in Scotland, and home to two giant pandas, Tian Tian and Yang Guang, on loan from the People's Republic of China. Edinburgh is also home to The Royal Yacht Britannia, decommissioned in 1997 and now a five-star visitor attraction and evening events venue permanently berthed at Ocean Terminal. Edinburgh contains Scotland's three National Galleries of Art as well as numerous smaller art galleries. The national collection is housed in the Scottish National Gallery, located on The Mound, comprising the linked National Gallery of Scotland building and the Royal Scottish Academy building. Contemporary collections are shown in the Scottish National Gallery of Modern Art which occupies a split site at Belford. The Scottish National Portrait Gallery on Queen Street focuses on portraits and photography. The council-owned City Art Centre in Market Street mounts regular art exhibitions. Across the road, The Fruitmarket Gallery offers world-class exhibitions of contemporary art, featuring work by British and international artists with both emerging and established international reputations. The city hosts several of Scotland's galleries and organisations dedicated to contemporary visual art. Significant strands of this infrastructure include Creative Scotland, Edinburgh College of Art, Talbot Rice Gallery (University of Edinburgh), Collective Gallery (based at the City Observatory) and the Edinburgh Annuale. There are also many small private shops/galleries that provide space to showcase works from local artists. The locale around Princes Street is the main shopping area in the city centre, with souvenir shops, chain stores such as Boots the Chemist, Edinburgh Woollen Mill, and H&M. George Street, north of Princes Street, has several upmarket shops and independent stores. At the east end of Princes Street, the redeveloped St James Quarter opened its doors in June 2021, while next to the Balmoral Hotel and Waverley Station is Waverley Market. Multrees Walk is a pedestrian shopping district, dominated by the presence of Harvey Nichols, and other names including Louis Vuitton, Mulberry and Michael Kors. Edinburgh also has substantial retail parks outside the city centre. These include The Gyle Shopping Centre and Hermiston Gait in the west of the city, Cameron Toll Shopping Centre, Straiton Retail Park (actually just outside the city, in Midlothian) and Fort Kinnaird in the south and east, and Ocean Terminal in the north on the Leith waterfront. Following local government reorganisation in 1996, the City of Edinburgh Council constitutes one of the 32 council areas of Scotland. Like all other local authorities of Scotland, the council has powers over most matters of local administration such as housing, planning, local transport, parks, economic development and regeneration. The council comprises 63 elected councillors, returned from 17 multi-member electoral wards in the city. 
Following the 2007 City of Edinburgh Council election, the incumbent Labour Party lost majority control of the council after 23 years to a Liberal Democrat/SNP coalition. After the 2017 election, the SNP and Labour formed a coalition administration, which lasted until the next election in 2022. The 2022 City of Edinburgh Council election resulted in the most politically balanced council in the UK, with 19 SNP, 13 Labour, 12 Liberal Democrat, 10 Green and 9 Conservative councillors. A minority Labour administration was formed, being voted in by Scottish Conservative and Scottish Liberal Democrat councillors. The SNP and Greens presented a coalition agreement, but could not command majority support in the Council. The formation of an administration supported by Conservatives caused controversy within the Scottish Labour Party group and led to the suspension of two Labour councillors for abstaining on the vote to approve the new administration. The city's coat of arms was registered by the Lord Lyon King of Arms in 1732. Edinburgh, like all of Scotland, is represented in the Scottish Parliament, situated in the Holyrood area of the city. For electoral purposes, the city is divided into six constituencies which, along with three seats outside the city, form part of the Lothian region. Each constituency elects one Member of the Scottish Parliament (MSP) by the first-past-the-post system of election, and the region elects seven additional MSPs to produce a result based on a form of proportional representation. As of the 2021 election, the Scottish National Party have four MSPs: Ash Denham for Edinburgh Eastern, Ben Macpherson for Edinburgh Northern and Leith, Gordon MacDonald for Edinburgh Pentlands, and Angus Robertson for Edinburgh Central. Alex Cole-Hamilton, the Leader of the Scottish Liberal Democrats, represents Edinburgh Western, and Daniel Johnson of the Scottish Labour Party represents Edinburgh Southern. In addition, the city is represented by seven regional MSPs for the Lothian electoral region: three Conservatives (Jeremy Balfour, Miles Briggs and Sue Webber), two from Labour (Sarah Boyack and Foysol Choudhury) and two Scottish Greens (the party's Co-Leader Lorna Slater and Alison Johnstone). However, following her election as the Presiding Officer of the 6th Session of the Scottish Parliament on 13 May 2021, Alison Johnstone has abided by the established parliamentary convention for speakers and renounced all affiliation with her former political party for the duration of her term as Presiding Officer. She presently sits as an independent MSP for the Lothian region. Edinburgh is also represented in the House of Commons of the United Kingdom by five Members of Parliament. The city is divided into Edinburgh North and Leith, Edinburgh East, Edinburgh South, Edinburgh South West, and Edinburgh West, each constituency electing one member by the first-past-the-post system. Since the 2019 UK general election, Edinburgh has been represented by three Scottish National Party MPs (Deirdre Brock in Edinburgh North and Leith, Tommy Sheppard in Edinburgh East, and Joanna Cherry in Edinburgh South West), one Liberal Democrat MP in Edinburgh West (Christine Jardine) and one Labour MP in Edinburgh South (Ian Murray). 
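The mixed constituency-plus-regional arrangement described above (the additional member system) can be illustrated with a small worked sketch. The Python snippet below is not part of the article: the party names and vote totals are hypothetical, and it simply shows the d'Hondt-style divisor calculation typically used for such additional-member allocations once constituency results are known.

# Illustrative sketch only (not from the article): regional "additional member"
# seats in a region such as Lothian are allocated round by round, dividing each
# party's regional vote by one more than the seats it already holds.
# All party names and vote totals below are hypothetical.

def allocate_list_seats(regional_votes, constituency_seats, list_seats_available=7):
    list_seats = {party: 0 for party in regional_votes}
    for _ in range(list_seats_available):
        # Each round, divide a party's regional vote by (1 + constituency seats
        # won + list seats already gained); the highest quotient takes the seat.
        # (Ties are broken arbitrarily in this sketch.)
        quotients = {
            party: votes / (1 + constituency_seats.get(party, 0) + list_seats[party])
            for party, votes in regional_votes.items()
        }
        winner = max(quotients, key=quotients.get)
        list_seats[winner] += 1
    return list_seats

# Hypothetical example: a party that swept the constituencies starts with a
# large divisor, so most of the seven list seats go to the other parties.
votes = {"Party A": 120000, "Party B": 90000, "Party C": 60000, "Party D": 30000}
constituencies_won = {"Party A": 5, "Party B": 1}
print(allocate_list_seats(votes, constituencies_won))

The point of the divisor is that constituency wins count against a party when list seats are handed out, which is how the regional seats pull the overall result towards proportionality.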
Edinburgh Airport is Scotland's busiest airport and the principal international gateway to the capital, handling over 14.7 million passengers in 2019, when it was also the sixth-busiest airport in the United Kingdom by total passengers. In anticipation of rising passenger numbers, the airport's former operator, BAA, outlined a draft masterplan in 2011 to provide for the expansion of the airfield and the terminal building. In June 2012, Global Infrastructure Partners purchased the airport for £807 million. The possibility of building a second runway to cope with an increased number of aircraft movements has also been mooted. Travel in Edinburgh is undertaken predominantly by bus. Lothian Buses, the successor company to Edinburgh Corporation Transport Department, operate the majority of city bus services within the city and to surrounding suburbs, with most routes running via Princes Street. Services further afield operate from the Edinburgh Bus Station off St Andrew Square and Waterloo Place and are operated mainly by Stagecoach East Scotland, Scottish Citylink, National Express Coaches and Borders Buses. Lothian Buses and McGill's Scotland East operate the city's branded public tour buses. The night bus service and airport buses are mainly operated by Lothian Buses. In 2019, Lothian Buses recorded 124.2 million passenger journeys. To tackle traffic congestion, Edinburgh is now served by six park & ride sites on the periphery of the city at Sheriffhall (in Midlothian), Ingliston, Riccarton, Inverkeithing (in Fife), Newcraighall and Straiton (in Midlothian). A referendum of Edinburgh residents in February 2005 rejected a proposal to introduce congestion charging in the city. Edinburgh Waverley is the second-busiest railway station in Scotland, with only Glasgow Central handling more passengers. On the evidence of passenger entries and exits between April 2015 and March 2016, Edinburgh Waverley is the fifth-busiest station outside London; it is also the UK's second biggest station in terms of the number of platforms and area size. Waverley is the terminus for most trains arriving from London King's Cross and the departure point for many rail services within Scotland operated by ScotRail. To the west of the city centre lies Haymarket station, which is an important commuter stop. Opened in 2003, Edinburgh Park station serves the Gyle business park in the west of the city and the nearby Gogarburn headquarters of the Royal Bank of Scotland. The Edinburgh Crossrail route connects Edinburgh Park with Haymarket, Edinburgh Waverley and the suburban stations of Brunstane and Newcraighall in the east of the city. There are also commuter lines to Edinburgh Gateway, South Gyle and Dalmeny, the latter serving South Queensferry by the Forth Bridges, and to Wester Hailes and Curriehill in the south-west of the city. Edinburgh Trams became operational on 31 May 2014. The city had been without a tram system since Edinburgh Corporation Tramways ceased on 16 November 1956. Following parliamentary approval in 2007, construction began in early 2008. The first stage of the project was expected to be completed by July 2011 but, following delays caused by extra utility work and a long-running contractual dispute between the council and the main contractor, Bilfinger SE, the project was rescheduled. The line opened in 2014 but had been cut short to 8.7 mi (14.0 km) in length, running from Edinburgh Airport to York Place in the east end of the city. 
The line was later extended north to Leith and Newhaven, opening a further eight stops to passengers in June 2023. The York Place stop was replaced by a new island stop at Picardy Place. The original plan would have seen a second line run from Haymarket through Ravelston and Craigleith to Granton Square on the Waterfront Edinburgh development. This was shelved in 2011 but is now once again under consideration, as is another line potentially linking the south of the city and the Bioquarter. There were also long-term plans for lines running west from the airport to Ratho and Newbridge and another connecting Granton to Newhaven via Lower Granton Road. Lothian Buses and Edinburgh Trams are both owned and operated by Transport for Edinburgh. Despite its modern transport links, in January 2021 Edinburgh was named the most congested city in the UK for the fourth year running, though it had fallen to 7th place by 2022. There are three universities in Edinburgh: the University of Edinburgh, Heriot-Watt University and Edinburgh Napier University. Established by royal charter in 1583, the University of Edinburgh is one of Scotland's ancient universities and is the fourth oldest in the country after St Andrews, Glasgow and Aberdeen. Originally centred on Old College, the university expanded to premises on The Mound, the Royal Mile and George Square. Today, the King's Buildings in the south of the city contain most of the schools within the College of Science and Engineering. In 2002, the medical school moved to purpose-built accommodation adjacent to the new Royal Infirmary of Edinburgh at Little France. The university is placed 16th in the QS World University Rankings for 2022. Heriot-Watt University is based at the Riccarton campus in the west of Edinburgh. Originally established in 1821 as the world's first mechanics' institute, it was granted university status by royal charter in 1966. It has other campuses in the Scottish Borders, Orkney, the United Arab Emirates and Putrajaya in Malaysia. It takes the name Heriot-Watt from Scottish inventor James Watt and Scottish philanthropist and goldsmith George Heriot. Heriot-Watt University was named International University of the Year by The Times and Sunday Times Good University Guide 2018. In the latest Research Excellence Framework, it was ranked overall in the top 25% of UK universities and 1st in Scotland for research impact. Edinburgh Napier University was originally founded as Napier College, which was renamed Napier Polytechnic in 1986 and gained university status in 1992. Edinburgh Napier University has campuses in the south and west of the city, including the former Merchiston Tower and Craiglockhart Hydropathic. It is home to the Screen Academy Scotland. Queen Margaret University was located in Edinburgh before it moved to a new campus just outside the city boundary on the edge of Musselburgh in 2008. Until 2012, further education colleges in the city included Jewel and Esk College (incorporating Leith Nautical College, founded in 1903), Telford College, opened in 1968, and Stevenson College, opened in 1970. These have now been amalgamated to form Edinburgh College. Scotland's Rural College also has a campus in south Edinburgh. Other institutions include the Royal College of Surgeons of Edinburgh and the Royal College of Physicians of Edinburgh, which were established by royal charter in 1506 and 1681 respectively. The Trustees Drawing Academy of Edinburgh, founded in 1760, became the Edinburgh College of Art in 1907. 
There are 18 nursery, 94 primary and 23 secondary schools administered by the City of Edinburgh Council. Edinburgh is home to The Royal High School, one of the oldest schools in the country and the world. The city also has several independent, fee-paying schools including Edinburgh Academy, Fettes College, George Heriot's School, George Watson's College, Merchiston Castle School, Stewart's Melville College and The Mary Erskine School. In 2009, the proportion of pupils attending independent schools was 24.2%, far above the Scottish national average of just over 7% and higher than in any other region of Scotland. In August 2013, the City of Edinburgh Council opened the city's first stand-alone Gaelic primary school, Bun-sgoil Taobh na Pàirce. The main NHS Lothian hospitals serving the Edinburgh area are the Royal Infirmary of Edinburgh, which includes the University of Edinburgh Medical School, and the Western General Hospital, which has a large cancer treatment centre and nurse-led Minor Injuries Clinic. The Royal Edinburgh Hospital in Morningside specialises in mental health. The Royal Hospital for Children and Young People, colloquially referred to as the Sick Kids, is a specialist paediatrics hospital. There are two private hospitals: Murrayfield Hospital in the west of the city and Shawfair Hospital in the south; both are owned by Spire Healthcare. Edinburgh has three football clubs that play in the Scottish Professional Football League (SPFL): Heart of Midlothian, founded in 1874, Hibernian, founded in 1875, and Edinburgh City F.C., founded in 1966. Heart of Midlothian and Hibernian are known locally as "Hearts" and "Hibs", respectively. Both play in the Scottish Premiership. They are the oldest city rivals in Scotland and the Edinburgh derby is one of the oldest derby matches in world football. Both clubs have won the Scottish league championship four times. Hearts have won the Scottish Cup eight times and the Scottish League Cup four times. Hibs have won the Scottish Cup and the Scottish League Cup three times each. Edinburgh City were promoted to Scottish League Two in the 2015–16 season, becoming the first club to win promotion to the SPFL via the pyramid system playoffs. Edinburgh was also home to four other former Scottish Football League clubs: the original Edinburgh City (founded in 1928), Leith Athletic, Meadowbank Thistle and St Bernard's. Meadowbank Thistle played at Meadowbank Stadium until 1995, when the club moved to Livingston and became Livingston F.C. The Scottish national team has very occasionally played at Easter Road and Tynecastle, although its normal home stadium is Hampden Park in Glasgow. St Bernard's' New Logie Green was used to host the 1896 Scottish Cup Final, the only time the match has been played outside Glasgow. The city also plays host to Lowland Football League clubs Civil Service Strollers, Edinburgh University and Spartans, as well as East of Scotland League clubs Craigroyston, Edinburgh United, Heriot-Watt University, Leith Athletic, Lothian Thistle Hutchison Vale, and Tynecastle. In women's football, Hearts, Hibs and Spartans play in the SWPL 1. Hutchison Vale and Boroughmuir Thistle play in the SWPL 2. The Scotland national rugby union team play at Murrayfield Stadium, and the professional Edinburgh Rugby team play at the adjacent Edinburgh Rugby Stadium; both are owned by the Scottish Rugby Union and are also used for other events, including music concerts. Murrayfield is the largest-capacity stadium in Scotland, seating 67,144 spectators. 
Edinburgh is also home to Scottish Premiership teams Boroughmuir RFC, Currie RFC, the Edinburgh Academicals, Heriot's Rugby Club and Watsonians RFC. The Edinburgh Academicals ground at Raeburn Place was the location of the world's first international rugby game on 27 March 1871, between Scotland and England. Rugby league is represented by the Edinburgh Eagles, who play in the Rugby League Conference Scotland Division. Murrayfield Stadium has hosted the Magic Weekend, where all Super League matches are played in the stadium over one weekend. The Scottish cricket team, which represents Scotland internationally, play their home matches at the Grange cricket club. The Edinburgh Capitals are the latest of a succession of ice hockey clubs in the Scottish capital. Previously Edinburgh was represented by the Murrayfield Racers (2018), the original Murrayfield Racers (who folded in 1996) and the Edinburgh Racers. The club play their home games at the Murrayfield Ice Rink and have competed in the eleven-team professional Scottish National League (SNL) since the 2018–19 season. Next door to Murrayfield Ice Rink is a dedicated seven-sheet curling facility where curling is played from October to March each season. Caledonia Pride are the only women's professional basketball team in Scotland. Established in 2016, the team compete in the UK-wide Women's British Basketball League and play their home matches at the Oriam National Performance Centre. Edinburgh also has several men's basketball teams within the Scottish National League. Boroughmuir Blaze, City of Edinburgh Kings and Edinburgh Lions all compete in Division 1 of the National League, and Pleasance B.C. compete in Division 2. The Edinburgh Diamond Devils is a baseball club which won its first Scottish Championship in 1991 as the "Reivers". The team repeated the achievement in 1992, becoming the first team to do so in league history. The same year saw the start of their first youth team, the Blue Jays. The club adopted its present name in 1999. Edinburgh has also hosted national and international sports events including the World Student Games, the 1970 British Commonwealth Games, the 1986 Commonwealth Games and the inaugural 2000 Commonwealth Youth Games. For the 1970 Games the city built Olympic-standard venues and facilities including Meadowbank Stadium and the Royal Commonwealth Pool. The Pool underwent refurbishment in 2012 and hosted the diving competition in the 2014 Commonwealth Games, which were held in Glasgow. In American football, the Scottish Claymores played WLAF/NFL Europe games at Murrayfield, including their World Bowl 96 victory. From 1995 to 1997 they played all their games there; from 1998 to 2000 they split their home matches between Murrayfield and Glasgow's Hampden Park, then moved to Glasgow full-time, with one final Murrayfield appearance in 2002. The city's most successful non-professional team are the Edinburgh Wolves, who play at Meadowbank Stadium. The Edinburgh Marathon has been held annually in the city since 2003, with more than 16,000 runners taking part on each occasion. Its organisers have called it "the fastest marathon in the UK" due to the elevation drop of 40 m (130 ft). The city also organises a half-marathon, as well as 10 km (6.2 mi) and 5 km (3.1 mi) races, including a 5 km (3 mi) race on 1 January each year. Edinburgh has a speedway team, the Edinburgh Monarchs, which, since the loss of its stadium in the city, has raced at the Lothian Arena in Armadale, West Lothian. 
The Monarchs have won the Premier League championship five times in their history, in 2003 and again in 2008, 2010, 2014 and 2015. The city also has a basketball club, the Edinburgh Tigers. Edinburgh has a long literary tradition, which became especially evident during the Scottish Enlightenment. This heritage and the city's lively literary life in the present led to it being declared the first UNESCO City of Literature in 2004. Prominent authors who have lived in Edinburgh include the economist Adam Smith, born in Kirkcaldy and author of The Wealth of Nations; James Boswell, biographer of Samuel Johnson; Sir Walter Scott, creator of the historical novel and author of works such as Rob Roy, Ivanhoe, and Heart of Midlothian; James Hogg, author of The Private Memoirs and Confessions of a Justified Sinner; Robert Louis Stevenson, creator of Treasure Island, Kidnapped, and Strange Case of Dr Jekyll and Mr Hyde; Sir Arthur Conan Doyle, the creator of Sherlock Holmes; Muriel Spark, author of The Prime of Miss Jean Brodie; diarist Janet Harden; Irvine Welsh, author of Trainspotting, whose novels are mostly set in the city and often written in colloquial Scots; Ian Rankin, author of the Inspector Rebus series of crime thrillers; Alexander McCall Smith, author of the No. 1 Ladies' Detective Agency series; and J. K. Rowling, author of Harry Potter, who wrote much of her first book in Edinburgh coffee shops and now lives in the Cramond area of the city. Scotland has a rich history of science and engineering, with Edinburgh producing a number of leading figures. John Napier, inventor of logarithms, was born in Merchiston Tower and lived and died in the city. His house now forms part of the original campus of Napier University, which was named in his honour. He lies buried under St. Cuthbert's Church. James Clerk Maxwell, founder of the modern theory of electromagnetism, was born at 14 India Street (now the home of the James Clerk Maxwell Foundation) and educated at the Edinburgh Academy and the University of Edinburgh, as was the engineer and telephone pioneer Alexander Graham Bell. James Braidwood, who organised Britain's first municipal fire brigade, was also born in the city and began his career there. Other names connected with the city include physicist Max Born, a principal founder of quantum mechanics and a Nobel laureate; Charles Darwin, the biologist who propounded the theory of natural selection; David Hume, philosopher, economist and historian; James Hutton, regarded as the "Father of Geology"; Joseph Black, the chemist who discovered magnesium and carbon dioxide, and one of the founders of thermodynamics; pioneering medical researchers Joseph Lister and James Young Simpson; Daniel Rutherford, chemist and discoverer of the element nitrogen; Colin Maclaurin, mathematician and developer of the Maclaurin series; and Ian Wilmut, the geneticist involved in the cloning of Dolly the sheep just outside Edinburgh, at the Roslin Institute. The stuffed carcass of Dolly the sheep is now on display in the National Museum of Scotland. The latest in a long line of science celebrities associated with the city is the theoretical physicist, Nobel laureate and University of Edinburgh professor emeritus Peter Higgs, born in Newcastle but resident in Edinburgh for most of his academic career, after whom the Higgs boson particle has been named. 
Edinburgh has been the birthplace of actors like Alastair Sim and Sir Sean Connery, known for being the first cinematic James Bond, the comedian and actor Ronnie Corbett, best known as one of The Two Ronnies, and the impressionist Rory Bremner. Famous artists from the city include the portrait painters Sir Henry Raeburn, Sir David Wilkie and Allan Ramsay. The city has produced or been home to some very successful musicians in recent decades, particularly Ian Anderson, front man of the band Jethro Tull, The Incredible String Band, the folk duo The Corries, Wattie Buchan, lead singer and founding member of punk band The Exploited, Shirley Manson, lead singer of the band Garbage, the Bay City Rollers, The Proclaimers, Boards of Canada and Idlewild. Edinburgh is the birthplace of former British Prime Minister Tony Blair who attended the city's Fettes College. Notorious criminals from Edinburgh's past include Deacon Brodie, head of a trades guild and Edinburgh city councillor by day but a burglar by night, who is said to have been the inspiration for Robert Louis Stevenson's story, the Strange Case of Dr Jekyll and Mr Hyde, and murderers Burke and Hare who delivered fresh corpses for dissection to the famous anatomist Robert Knox. Another well-known Edinburgh resident was Greyfriars Bobby. The small Skye Terrier reputedly kept vigil over his dead master's grave in Greyfriars Kirkyard for 14 years in the 1860s and 1870s, giving rise to a story of canine devotion which plays a part in attracting visitors to the city. The City of Edinburgh has entered into 14 international twinning arrangements since 1954. Most of the arrangements are styled as Twin Cities but the agreement with Kraków is designated as a Partner City, and the agreement with Kyoto Prefecture is officially styled as a Friendship Link, reflecting its status as the only region to be twinned with Edinburgh. For a list of consulates in Edinburgh, see List of diplomatic missions in Scotland.
[ { "paragraph_id": 0, "text": "Edinburgh (/ˈɛdɪnbərə/ Scots: [ˈɛdɪnbʌrə]; Scottish Gaelic: Dùn Èideann [ˌt̪un ˈeːtʲən̪ˠ]) is the capital city of Scotland and one of its 32 council areas. The city is located in south-east Scotland, and is bounded to the north by the Firth of Forth estuary and to the south by the Pentland Hills. Edinburgh had a population of 506,520 in mid-2020, making it the second-most populous city in Scotland and the seventh-most populous in the United Kingdom.", "title": "" }, { "paragraph_id": 1, "text": "Recognised as the capital of Scotland since at least the 15th century, Edinburgh is the seat of the Scottish Government, the Scottish Parliament, the highest courts in Scotland, and the Palace of Holyroodhouse, the official residence of the British monarch in Scotland. It is also the annual venue of the General Assembly of the Church of Scotland. The city has long been a centre of education, particularly in the fields of medicine, Scottish law, literature, philosophy, the sciences and engineering. The University of Edinburgh, founded in 1582 and now one of three in the city, is considered one of the best research institutions in the world. It is the second-largest financial centre in the United Kingdom, the fourth largest in Europe, and the thirteenth largest internationally.", "title": "" }, { "paragraph_id": 2, "text": "The city is a cultural centre, and is the home of institutions including the National Museum of Scotland, the National Library of Scotland and the Scottish National Gallery. The city is also known for the Edinburgh International Festival and the Fringe, the latter being the world's largest annual international arts festival. Historic sites in Edinburgh include Edinburgh Castle, the Palace of Holyroodhouse, the churches of St. Giles, Greyfriars and the Canongate, and the extensive Georgian New Town built in the 18th/19th centuries. Edinburgh's Old Town and New Town together are listed as a UNESCO World Heritage Site, which has been managed by Edinburgh World Heritage since 1999. The city's historical and cultural attractions have made it the UK's second-most visited tourist destination, attracting 4.9 million visits, including 2.4 million from overseas in 2018.", "title": "" }, { "paragraph_id": 3, "text": "Edinburgh is governed by the City of Edinburgh Council, a unitary authority. The City of Edinburgh council area had an estimated population of 526,470 in mid-2021, and includes outlying towns and villages which are not part of Edinburgh proper. The city is in the Lothian region and was historically part of the shire of Midlothian (also called Edinburghshire).", "title": "" }, { "paragraph_id": 4, "text": "\"Edin\", the root of the city's name, derives from Eidyn, the name for the region in Cumbric, the Brittonic Celtic language formerly spoken there. The name's meaning is unknown. The district of Eidyn was centred on the stronghold of Din Eidyn, the dun or hillfort of Eidyn. This stronghold is believed to have been located at Castle Rock, now the site of Edinburgh Castle. A siege of Din Eidyn by Oswald, king of the Angles of Northumbria in 638 marked the beginning of three centuries of Germanic influence in south east Scotland that laid the foundations for the development of Scots, before the town was ultimately subsumed in 954 by the kingdom known to the English as Scotland. As the language shifted from Cumbric to Northumbrian Old English and then Scots, the Brittonic din in Din Eidyn was replaced by burh, producing Edinburgh. 
In Scottish Gaelic din becomes dùn, producing modern Dùn Èideann.", "title": "Etymology" }, { "paragraph_id": 5, "text": "The city is affectionately nicknamed Auld Reekie, Scots for Old Smoky, for the views from the country of the smoke-covered Old Town. In Walter Scott's 1820 novel The Abbot, a character observes that \"yonder stands Auld Reekie—you may see the smoke hover over her at twenty miles' distance\". In 1898, Thomas Carlyle comments on the phenomenon: \"Smoke cloud hangs over old Edinburgh, for, ever since Aeneas Silvius's time and earlier, the people have the art, very strange to Aeneas, of burning a certain sort of black stones, and Edinburgh with its chimneys is called 'Auld Reekie' by the country people\". 19th-century historian Robert Chambers argued that the sobriquet could not be traced before the reign of Charles II in the late 17th century. Instead, he attributed the name to a Fife laird, Durham of Largo, who regulated the bedtime of his children by the smoke rising above Edinburgh from the fires of the tenements. \"It's time now bairns, to tak' the beuks, and gang to our beds, for yonder's Auld Reekie, I see, putting on her nicht -cap!\".", "title": "Nicknames" }, { "paragraph_id": 6, "text": "Edinburgh has been popularly called the Athens of the North since the early 19th century. References to Athens, such as Athens of Britain and Modern Athens, had been made as early as the 1760s. The similarities were seen to be topographical but also intellectual. Edinburgh's Castle Rock reminded returning grand tourists of the Athenian Acropolis, as did aspects of the neoclassical architecture and layout of New Town. Both cities had flatter, fertile agricultural land sloping down to a port several miles away (respectively, Leith and Piraeus). Intellectually, the Scottish Enlightenment, with its humanist and rationalist outlook, was influenced by Ancient Greek philosophy. In 1822, artist Hugh William Williams organized an exhibition that showed his paintings of Athens alongside views of Edinburgh, and the idea of a direct parallel between both cities quickly caught the popular imagination. When plans were drawn up in the early 19th century to architecturally develop Calton Hill, the design of the National Monument directly copied Athens' Parthenon. Tom Stoppard's character Archie of Jumpers said, perhaps playing on Reykjavík meaning \"smoky bay\", that the \"Reykjavík of the South\" would be more appropriate.", "title": "Nicknames" }, { "paragraph_id": 7, "text": "The city has also been known by several Latin names, such as Edinburgum, while the adjectival forms Edinburgensis and Edinensis are used in educational and scientific contexts.", "title": "Nicknames" }, { "paragraph_id": 8, "text": "Edina is a late 18th-century poetical form used by the Scots poets Robert Fergusson and Robert Burns. \"Embra\" or \"Embro\" are colloquialisms from the same time, as in Robert Garioch's Embro to the Ploy.", "title": "Nicknames" }, { "paragraph_id": 9, "text": "Ben Jonson described it as \"Britaine's other eye\", and Sir Walter Scott referred to it as \"yon Empress of the North\". Robert Louis Stevenson, also a son of the city, wrote that Edinburgh \"is what Paris ought to be\".", "title": "Nicknames" }, { "paragraph_id": 10, "text": "The earliest known human habitation in the Edinburgh area was at Cramond, where evidence was found of a Mesolithic camp site dated to c. 8500 BC. 
Traces of later Bronze Age and Iron Age settlements have been found on Castle Rock, Arthur's Seat, Craiglockhart Hill and the Pentland Hills.", "title": "History" }, { "paragraph_id": 11, "text": "When the Romans arrived in Lothian at the end of the 1st century AD, they found a Brittonic Celtic tribe whose name they recorded as the Votadini. The Votadini transitioned into the Gododdin kingdom in the Early Middle Ages, with Eidyn serving as one of the kingdom's districts. During this period, the Castle Rock site, thought to have been the stronghold of Din Eidyn, emerged as the kingdom's major centre. The medieval poem Y Gododdin describes a war band from across the Brittonic world who gathered in Eidyn before a fateful raid; this may describe a historical event around AD 600.", "title": "History" }, { "paragraph_id": 12, "text": "In 638, the Gododdin stronghold was besieged by forces loyal to King Oswald of Northumbria, and around this time control of Lothian passed to the Angles. Their influence continued for the next three centuries until around 950, when, during the reign of Indulf, son of Constantine II, the \"burh\" (fortress), named in the 10th-century Pictish Chronicle as oppidum Eden, was abandoned to the Scots. It thenceforth remained, for the most part, under their jurisdiction.", "title": "History" }, { "paragraph_id": 13, "text": "The royal burgh was founded by King David I in the early 12th century on land belonging to the Crown, though the date of its charter is unknown. The first documentary evidence of the medieval burgh is a royal charter, c. 1124–1127, by King David I granting a toft in burgo meo de Edenesburg to the Priory of Dunfermline. The shire of Edinburgh seems to have also been created in the reign of David I, possibly covering all of Lothian at first, but by 1305 the eastern and western parts of Lothian had become Haddingtonshire and Linlithgowshire, leaving Edinburgh as the county town of a shire covering the central part of Lothian, which was called Edinburghshire or Midlothian (the latter name being an informal, but commonly used, alternative until the county's name was legally changed in 1947).", "title": "History" }, { "paragraph_id": 14, "text": "Edinburgh was largely under English control from 1291 to 1314 and from 1333 to 1341, during the Wars of Scottish Independence. When the English invaded Scotland in 1298, Edward I of England chose not to enter Edinburgh but passed by it with his army.", "title": "History" }, { "paragraph_id": 15, "text": "In the middle of the 14th century, the French chronicler Jean Froissart described it as the capital of Scotland (c. 1365), and James III (1451–88) referred to it in the 15th century as \"the principal burgh of our kingdom\". 
In 1482 James III \"granted and perpetually confirmed to the said Provost, Bailies, Clerk, Council, and Community, and their successors, the office of Sheriff within the Burgh for ever, to be exercised by the Provost for the time as Sheriff, and by the Bailies for the time as Sheriffsdepute conjunctly and severally; with full power to hold Courts, to punish transgressors not only by banishment but by death, to appoint officers of Court, and to do everything else appertaining to the office of Sheriff; as also to apply to their own proper use the fines and escheats arising out of the exercise of the said office.\" Despite being burnt by the English in 1544, Edinburgh continued to develop and grow, and was at the centre of events in the 16th-century Scottish Reformation and 17th-century Wars of the Covenant. In 1582, Edinburgh's town council was given a royal charter by King James VI permitting the establishment of a university; founded as Tounis College (Town's College), the institution developed into the University of Edinburgh, which contributed to Edinburgh's central intellectual role in subsequent centuries.", "title": "History" }, { "paragraph_id": 16, "text": "In 1603, King James VI of Scotland succeeded to the English throne, uniting the crowns of Scotland and England in a personal union known as the Union of the Crowns, though Scotland remained, in all other respects, a separate kingdom. In 1638, King Charles I's attempt to introduce Anglican church forms in Scotland encountered stiff Presbyterian opposition culminating in the conflicts of the Wars of the Three Kingdoms. Subsequent Scottish support for Charles Stuart's restoration to the throne of England resulted in Edinburgh's occupation by Oliver Cromwell's Commonwealth of England forces – the New Model Army – in 1650.", "title": "History" }, { "paragraph_id": 17, "text": "In the 17th century, Edinburgh's boundaries were still defined by the city's defensive town walls. As a result, the city's growing population was accommodated by increasing the height of the houses. Buildings of 11 storeys or more were common, and have been described as forerunners of the modern-day skyscraper. Most of these old structures were replaced by the predominantly Victorian buildings seen in today's Old Town. In 1611 an act of parliament created the High Constables of Edinburgh to keep order in the city, thought to be the oldest statutory police force in the world.", "title": "History" }, { "paragraph_id": 18, "text": "Following the Treaty of Union in 1706, the Parliaments of England and Scotland passed Acts of Union in 1706 and 1707 respectively, uniting the two kingdoms in the Kingdom of Great Britain effective from 1 May 1707. As a consequence, the Parliament of Scotland merged with the Parliament of England to form the Parliament of Great Britain, which sat at Westminster in London. The Union was opposed by many Scots, resulting in riots in the city.", "title": "History" }, { "paragraph_id": 19, "text": "By the first half of the 18th century, Edinburgh was described as one of Europe's most densely populated, overcrowded and unsanitary towns. 
Visitors were struck by the fact that the social classes shared the same urban space, even inhabiting the same tenement buildings; although here a form of social segregation did prevail, whereby shopkeepers and tradesmen tended to occupy the cheaper-to-rent cellars and garrets, while the more well-to-do professional classes occupied the more expensive middle storeys.", "title": "History" }, { "paragraph_id": 20, "text": "During the Jacobite rising of 1745, Edinburgh was briefly occupied by the Jacobite \"Highland Army\" before its march into England. After its eventual defeat at Culloden, there followed a period of reprisals and pacification, largely directed at the rebellious clans. In Edinburgh, the Town Council, keen to emulate London by initiating city improvements and expansion to the north of the castle, reaffirmed its belief in the Union and loyalty to the Hanoverian monarch George III by its choice of names for the streets of the New Town: for example, Rose Street and Thistle Street; and for the royal family, George Street, Queen Street, Hanover Street, Frederick Street and Princes Street (in honour of George's two sons). The consistently geometric layout of the plan for the extension of Edinburgh was the result of a major competition in urban planning staged by the Town Council in 1766.", "title": "History" }, { "paragraph_id": 21, "text": "In the second half of the century, the city was at the heart of the Scottish Enlightenment, when thinkers like David Hume, Adam Smith, James Hutton and Joseph Black were familiar figures in its streets. Edinburgh became a major intellectual centre, earning it the nickname \"Athens of the North\" because of its many neo-classical buildings and reputation for learning, recalling ancient Athens. In the 18th-century novel The Expedition of Humphry Clinker by Tobias Smollett one character describes Edinburgh as a \"hotbed of genius\". Edinburgh was also a major centre for the Scottish book trade. The highly successful London bookseller Andrew Millar was apprenticed there to James McEuen.", "title": "History" }, { "paragraph_id": 22, "text": "From the 1770s onwards, the professional and business classes gradually deserted the Old Town in favour of the more elegant \"one-family\" residences of the New Town, a migration that changed the city's social character. According to the foremost historian of this development, \"Unity of social feeling was one of the most valuable heritages of old Edinburgh, and its disappearance was widely and properly lamented.\"", "title": "History" }, { "paragraph_id": 23, "text": "Despite an enduring myth to the contrary, Edinburgh became an industrial centre with its traditional industries of printing, brewing and distilling continuing to grow in the 19th century and joined by new industries such as rubber works, engineering works and others. By 1821, Edinburgh had been overtaken by Glasgow as Scotland's largest city. The city centre between Princes Street and George Street became a major commercial and shopping district, a development partly stimulated by the arrival of railways in the 1840s. The Old Town became an increasingly dilapidated, overcrowded slum with high mortality rates. Improvements carried out under Lord Provost William Chambers in the 1860s began the transformation of the area into the predominantly Victorian Old Town seen today. 
More improvements followed in the early 20th century as a result of the work of Patrick Geddes, but relative economic stagnation during the two world wars and beyond saw the Old Town deteriorate further before major slum clearance in the 1960s and 1970s began to reverse the process. University building developments which transformed the George Square and Potterrow areas proved highly controversial.", "title": "History" }, { "paragraph_id": 24, "text": "Since the 1990s a new \"financial district\", including the Edinburgh International Conference Centre, has grown mainly on demolished railway property to the west of the castle, stretching into Fountainbridge, a run-down 19th-century industrial suburb which has undergone radical change since the 1980s with the demise of industrial and brewery premises. This ongoing development has enabled Edinburgh to maintain its place as the United Kingdom's second largest financial and administrative centre after London. Financial services now account for a third of all commercial office space in the city. The development of Edinburgh Park, a new business and technology park covering 38 acres (15 ha), 4 mi (6 km) west of the city centre, has also contributed to the District Council's strategy for the city's major economic regeneration.", "title": "History" }, { "paragraph_id": 25, "text": "In 1998, the Scotland Act, which came into force the following year, established a devolved Scottish Parliament and Scottish Executive (renamed the Scottish Government since September 2007). Both based in Edinburgh, they are responsible for governing Scotland while reserved matters such as defence, foreign affairs and some elements of income tax remain the responsibility of the Parliament of the United Kingdom in London.", "title": "History" }, { "paragraph_id": 26, "text": "In 2022, Edinburgh was affected by the 2022 Scotland bin strikes. In 2023, Edinburgh became the first capital city in Europe to sign the global Plant Based Treaty, which was introduced at COP26 in 2021 in Glasgow. Green Party councillor Steve Burgess introduced the treaty. The Scottish Countryside Alliance and other farming groups called the treaty \"anti-farming.\"", "title": "History" }, { "paragraph_id": 27, "text": "Situated in Scotland's Central Belt, Edinburgh lies on the southern shore of the Firth of Forth. The city centre is 2+1⁄2 mi (4.0 km) southwest of the shoreline of Leith and 26 mi (42 km) inland, as the crow flies, from the east coast of Scotland and the North Sea at Dunbar. While the early burgh grew up near the prominent Castle Rock, the modern city is often said to be built on seven hills, namely Calton Hill, Corstorphine Hill, Craiglockhart Hill, Braid Hill, Blackford Hill, Arthur's Seat and the Castle Rock, giving rise to allusions to the seven hills of Rome.", "title": "Geography" }, { "paragraph_id": 28, "text": "Occupying a narrow gap between the Firth of Forth to the north and the Pentland Hills and their outrunners to the south, the city sprawls over a landscape which is the product of early volcanic activity and later periods of intensive glaciation. Igneous activity between 350 and 400 million years ago, coupled with faulting, led to the creation of tough basalt volcanic plugs, which predominate over much of the area. One such example is the Castle Rock which forced the advancing ice sheet to divide, sheltering the softer rock and forming a 1 mi-long (1.6 km) tail of material to the east, thus creating a distinctive crag and tail formation. 
Glacial erosion on the north side of the crag gouged a deep valley later filled by the now drained Nor Loch. These features, along with another hollow on the rock's south side, formed an ideal natural strongpoint upon which Edinburgh Castle was built. Similarly, Arthur's Seat is the remains of a volcano dating from the Carboniferous period, which was eroded by a glacier moving west to east during the ice age. Erosive action such as plucking and abrasion exposed the rocky crags to the west before leaving a tail of deposited glacial material swept to the east. This process formed the distinctive Salisbury Crags, a series of teschenite cliffs between Arthur's Seat and the location of the early burgh. The residential areas of Marchmont and Bruntsfield are built along a series of drumlin ridges south of the city centre, which were deposited as the glacier receded.", "title": "Geography" }, { "paragraph_id": 29, "text": "Other prominent landforms such as Calton Hill and Corstorphine Hill are also products of glacial erosion. The Braid Hills and Blackford Hill are a series of small summits to the south of the city centre that command expansive views looking northwards over the urban area to the Firth of Forth.", "title": "Geography" }, { "paragraph_id": 30, "text": "Edinburgh is drained by the river named the Water of Leith, which rises at the Colzium Springs in the Pentland Hills and runs for 18 miles (29 km) through the south and west of the city, emptying into the Firth of Forth at Leith. The nearest the river gets to the city centre is at Dean Village on the north-western edge of the New Town, where a deep gorge is spanned by Thomas Telford's Dean Bridge, built in 1832 for the road to Queensferry. The Water of Leith Walkway is a mixed-use trail that follows the course of the river for 19.6 km (12.2 mi) from Balerno to Leith.", "title": "Geography" }, { "paragraph_id": 31, "text": "Excepting the shoreline of the Firth of Forth, Edinburgh is encircled by a green belt, designated in 1957, which stretches from Dalmeny in the west to Prestongrange in the east. With an average width of 3.2 km (2 mi) the principal objectives of the green belt were to contain the outward expansion of the city and to prevent the agglomeration of urban areas. Expansion affecting the green belt is strictly controlled but developments such as Edinburgh Airport and the Royal Highland Showground at Ingliston lie within the zone. Similarly, suburbs such as Juniper Green and Balerno are situated on green belt land. One feature of the Edinburgh green belt is the inclusion of parcels of land within the city which are designated green belt, even though they do not connect with the peripheral ring. Examples of these independent wedges of green belt include Holyrood Park and Corstorphine Hill.", "title": "Geography" }, { "paragraph_id": 32, "text": "Edinburgh includes former towns and villages that retain much of their original character as settlements in existence before they were absorbed into the expanding city of the nineteenth and twentieth centuries. Many areas, such as Dalry, contain residences that are multi-occupancy buildings known as tenements, although the more southern and western parts of the city have traditionally been less built-up with a greater number of detached and semi-detached villas.", "title": "Geography" }, { "paragraph_id": 33, "text": "The historic centre of Edinburgh is divided in two by the broad green swathe of Princes Street Gardens. 
To the south, the view is dominated by Edinburgh Castle, built high on Castle Rock, and the long sweep of the Old Town descending towards Holyrood Palace. To the north lie Princes Street and the New Town.", "title": "Geography" }, { "paragraph_id": 34, "text": "The West End includes the financial district, with insurance and banking offices as well as the Edinburgh International Conference Centre.", "title": "Geography" }, { "paragraph_id": 35, "text": "Edinburgh's Old and New Towns were listed as a UNESCO World Heritage Site in 1995 in recognition of the unique character of the Old Town with its medieval street layout and the planned Georgian New Town, including the adjoining Dean Village and Calton Hill areas. There are over 4,500 listed buildings within the city, a higher proportion relative to area than in any other city in the United Kingdom.", "title": "Geography" }, { "paragraph_id": 36, "text": "The castle is perched on top of a rocky crag (the remnant of an extinct volcano) and the Royal Mile runs down the crest of a ridge from it, terminating at Holyrood Palace. Minor streets (called closes or wynds) lie on either side of the main spine, forming a herringbone pattern. Due to space restrictions imposed by the narrowness of this landform, the Old Town became home to some of the earliest \"high rise\" residential buildings. Multi-storey dwellings known as lands were the norm from the 16th century onwards, with ten and eleven storeys being typical and one even reaching fourteen or fifteen storeys. Numerous vaults below street level were inhabited to accommodate the influx of incomers, particularly Irish immigrants, during the Industrial Revolution. The street has several fine public buildings such as St Giles' Cathedral, the City Chambers and the Law Courts. Other places of historical interest nearby are Greyfriars Kirkyard and Mary King's Close. The Grassmarket, running deep below the castle, is connected by the steep, double-terraced Victoria Street. The street layout is typical of the old quarters of many Northern European cities.", "title": "Geography" }, { "paragraph_id": 37, "text": "The New Town was an 18th-century solution to the problem of an increasingly crowded city which had been confined to the ridge sloping down from the castle. In 1766 a competition to design a \"New Town\" was won by James Craig, a 27-year-old architect. The plan was a rigid, ordered grid, which fitted in well with Enlightenment ideas of rationality. The principal street was to be George Street, running along the natural ridge to the north of what became known as the \"Old Town\". To either side of it are two other main streets: Princes Street and Queen Street. Princes Street has become Edinburgh's main shopping street and now has few of its Georgian buildings in their original state. The three main streets are connected by a series of streets running perpendicular to them. The east and west ends of George Street are terminated by St Andrew Square and Charlotte Square respectively. The latter, designed by Robert Adam, influenced the architectural style of the New Town into the early 19th century. Bute House, the official residence of the First Minister of Scotland, is on the north side of Charlotte Square.", "title": "Geography" }, { "paragraph_id": 38, "text": "The hollow between the Old and New Towns was formerly the Nor Loch, which was created for the town's defence but came to be used by the inhabitants for dumping their sewage. It was drained by the 1820s as part of the city's northward expansion.
Craig's original plan included an ornamental canal on the site of the loch, but this idea was abandoned. Soil excavated while laying the foundations of buildings in the New Town was dumped on the site of the loch to create the slope connecting the Old and New Towns known as The Mound.", "title": "Geography" }, { "paragraph_id": 39, "text": "In the middle of the 19th century the National Gallery of Scotland and Royal Scottish Academy Building were built on The Mound, and tunnels for the railway line between Haymarket and Waverley stations were driven through it.", "title": "Geography" }, { "paragraph_id": 40, "text": "The Southside is a residential part of the city, which includes the districts of St Leonards, Marchmont, Morningside, Newington, Sciennes, the Grange and Blackford. The Southside is broadly analogous to the area covered formerly by the Burgh Muir, and was developed as a residential area after the opening of the South Bridge in the 1780s. The Southside is particularly popular with families (many state and private schools are here), young professionals and students; the central University of Edinburgh campus is based around George Square, just north of Marchmont and the Meadows, and Napier University has major campuses around Merchiston and Morningside. The area is also well provided with hotel and \"bed and breakfast\" accommodation for visiting festival-goers. These districts often feature in works of fiction. For example, Church Hill in Morningside was the home of Muriel Spark's Miss Jean Brodie, and Ian Rankin's Inspector Rebus lives in Marchmont and works in St Leonards.", "title": "Geography" }, { "paragraph_id": 41, "text": "Leith was historically the port of Edinburgh, an arrangement of unknown date that was confirmed by the royal charter Robert the Bruce granted to the city in 1329. The port developed a separate identity from Edinburgh, which to some extent it still retains, and it was a matter of great resentment when the two burghs merged in 1920 into the City of Edinburgh. Even today the parliamentary seat is known as \"Edinburgh North and Leith\". The loss of traditional industries and commerce (the last shipyard closed in 1983) resulted in economic decline. The Edinburgh Waterfront development has transformed old dockland areas from Leith to Granton into residential areas with shopping and leisure facilities and helped rejuvenate the area. With the redevelopment, Edinburgh has gained the business of cruise liner companies which now provide cruises to Norway, Sweden, Denmark, Germany, and the Netherlands.", "title": "Geography" }, { "paragraph_id": 42, "text": "The coastal suburb of Portobello is characterised by Georgian villas, Victorian tenements, a beach and promenade and cafés, bars, restaurants and independent shops. There are rowing and sailing clubs and a restored Victorian swimming pool, including Turkish baths.", "title": "Geography" }, { "paragraph_id": 43, "text": "The urban area of Edinburgh is almost entirely within the City of Edinburgh Council boundary, merging with Musselburgh in East Lothian. Towns within easy reach of the city boundary include Inverkeithing, Haddington, Tranent, Prestonpans, Dalkeith, Bonnyrigg, Loanhead, Penicuik, Broxburn, Livingston and Dunfermline.
Edinburgh lies at the heart of the Edinburgh & South East Scotland City region with a population in 2014 of 1,339,380.", "title": "Geography" }, { "paragraph_id": 44, "text": "Like most of Scotland, Edinburgh has a cool, temperate, maritime climate which, despite its northerly latitude, is milder than places which lie at similar latitudes, such as Moscow and Labrador. The city's proximity to the sea mitigates any large variations in temperature or extremes of climate. Winter daytime temperatures rarely fall below freezing while summer temperatures are moderate, rarely exceeding 22 °C (72 °F). The highest temperature recorded in the city was 31.6 °C (88.9 °F) on 25 July 2019 at Gogarbank, beating the previous record of 31 °C (88 °F) on 4 August 1975 at Edinburgh Airport. The lowest temperature recorded in recent years was −14.6 °C (5.7 °F) during December 2010 at Gogarbank.", "title": "Geography" }, { "paragraph_id": 45, "text": "Given Edinburgh's position between the coast and hills, it is renowned as \"the windy city\", with the prevailing wind direction coming from the south-west, which is often associated with warm, unstable air from the North Atlantic Current that can give rise to rainfall – although considerably less than cities to the west, such as Glasgow. Rainfall is distributed fairly evenly throughout the year. Winds from an easterly direction are usually drier but considerably colder, and may be accompanied by haar, a persistent coastal fog. Vigorous Atlantic depressions, known as European windstorms, can affect the city between October and May.", "title": "Geography" }, { "paragraph_id": 46, "text": "Located slightly north of the city centre, the weather station at the Royal Botanic Garden Edinburgh (RBGE) has been an official weather station for the Met Office since 1956. The Met Office operates its own weather station at Gogarbank on the city's western outskirts, near Edinburgh Airport. This slightly inland station has a marginally wider temperature span between seasons and is cloudier and somewhat wetter, but the differences are minor.", "title": "Geography" }, { "paragraph_id": 47, "text": "Temperature and rainfall records have been kept at the Royal Observatory since 1764.", "title": "Geography" }, { "paragraph_id": 48, "text": "The most recent official population estimates (2020) are 506,520 for the locality (which includes Currie) and 530,990 for the Edinburgh settlement (which includes Musselburgh).", "title": "Demography" }, { "paragraph_id": 49, "text": "Edinburgh has a high proportion of young adults, with 19.5% of the population in their 20s (exceeded only by Aberdeen) and 15.2% in their 30s, which is the highest in Scotland. The proportion of Edinburgh's population born in the UK fell from 92% to 84% between 2001 and 2011, while the proportion of White Scottish-born fell from 78% to 70%. Of those Edinburgh residents born in the UK, 335,000 or 83% were born in Scotland, with 58,000 or 14% being born in England.", "title": "Demography" }, { "paragraph_id": 50, "text": "Some 13,000 people or 2.7% of the city's population are of Polish descent. 39,500 people or 8.2% of Edinburgh's population class themselves as Non-White, which is an increase from 4% in 2001. Of the Non-White population, the largest group by far are Asian, totalling 26,264 people. Within the Asian population, people of Chinese descent are now the largest sub-group, with 8,076 people, amounting to about 1.7% of the city's total population.
The city's population of Indian descent amounts to 6,470 (1.4% of the total population), while there are some 5,858 of Pakistani descent (1.2% of the total population). Although they account for only 1,277 people or 0.3% of the city's population, Edinburgh has the highest number and proportion of people of Bangladeshi descent in Scotland. Over 7,000 people were born in African countries (1.6% of the total population) and nearly 7,000 in the Americas. With the notable exception of Inner London, Edinburgh has a higher number of people born in the United States (over 3,700) than any other city in the UK.", "title": "Demography" }, { "paragraph_id": 51, "text": "The proportion of people born outside the UK was 15.9% compared with 8% in 2001.", "title": "Demography" }, { "paragraph_id": 52, "text": "A census by the Edinburgh presbytery in 1592 recorded a population of 8,003 adults spread equally north and south of the High Street which runs along the spine of the ridge sloping down from the Castle. In the 18th and 19th centuries, the population expanded rapidly, rising from 49,000 in 1751 to 136,000 in 1831, primarily due to migration from rural areas. As the population grew, problems of overcrowding in the Old Town, particularly in the cramped tenements that lined the present day Royal Mile and the Cowgate, were exacerbated. Poor sanitary arrangements resulted in a high incidence of disease, with outbreaks of cholera occurring in 1832, 1848 and 1866.", "title": "Demography" }, { "paragraph_id": 53, "text": "The construction of the New Town from 1767 onwards witnessed the migration of the professional and business classes from the difficult living conditions in the Old Town to the lower density, higher quality surroundings taking shape on land to the north. Expansion southwards from the Old Town saw more tenements being built in the 19th century, giving rise to Victorian suburbs such as Dalry, Newington, Marchmont and Bruntsfield.", "title": "Demography" }, { "paragraph_id": 54, "text": "Early 20th-century population growth coincided with lower-density suburban development. As the city expanded to the south and west, detached and semi-detached villas with large gardens replaced tenements as the predominant building style. Nonetheless, the 2001 census revealed that over 55% of Edinburgh's population were still living in tenements or blocks of flats, a figure in line with other Scottish cities, but much higher than other British cities, and even central London.", "title": "Demography" }, { "paragraph_id": 55, "text": "From the early to mid 20th century, the growth in population, together with slum clearance in the Old Town and other areas, such as Dumbiedykes, Leith, and Fountainbridge, led to the creation of new estates such as Stenhouse and Saughton, Craigmillar and Niddrie, Pilton and Muirhouse, Piershill, and Sighthill.", "title": "Demography" }, { "paragraph_id": 56, "text": "In 2018, the Church of Scotland had 20,956 members in 71 congregations in the Presbytery of Edinburgh. Its most prominent church is St Giles' on the Royal Mile, first dedicated in 1243 but believed to date from before the 12th century. Saint Giles is historically the patron saint of Edinburgh. 
St Cuthbert's, situated at the west end of Princes Street Gardens in the shadow of Edinburgh Castle and St Giles' can lay claim to being the oldest Christian sites in the city, though the present St Cuthbert's, designed by Hippolyte Blanc, was dedicated in 1894.", "title": "Demography" }, { "paragraph_id": 57, "text": "Other Church of Scotland churches include Greyfriars Kirk, the Canongate Kirk, St Andrew's and St George's West Church and the Barclay Church. The Church of Scotland Offices are in Edinburgh, as is the Assembly Hall where the annual General Assembly is held.", "title": "Demography" }, { "paragraph_id": 58, "text": "The Roman Catholic Archdiocese of St Andrews and Edinburgh has 27 parishes across the city. The Archbishop of St Andrews and Edinburgh has his official residence in Greenhill, the diocesan offices are in nearby Marchmont, and its cathedral is St Mary's Cathedral, Edinburgh. The Diocese of Edinburgh of the Scottish Episcopal Church has over 50 churches, half of them in the city. Its centre is the late 19th-century Gothic style St Mary's Cathedral in the West End's Palmerston Place. Orthodox Christianity is represented by Pan, Romanian and Russian Orthodox churches. There are several independent churches in the city, both Catholic and Protestant, including Charlotte Chapel, Carrubbers Christian Centre, Bellevue Chapel and Sacred Heart. There are also churches belonging to Quakers, Christadelphians, Seventh-day Adventists, Church of Christ, Scientist, The Church of Jesus Christ of Latter-day Saints (LDS Church) and Elim Pentecostal Church.", "title": "Demography" }, { "paragraph_id": 59, "text": "Muslims have several places of worship across the city. Edinburgh Central Mosque, the largest Islamic place of worship, is located in Potterrow on the city's Southside, near Bristo Square. Construction was largely financed by a gift from King Fahd of Saudi Arabia and was completed in 1998. There is also an Ahmadiyya Muslim community.", "title": "Demography" }, { "paragraph_id": 60, "text": "The first recorded presence of a Jewish community in Edinburgh dates back to the late 18th century. Edinburgh's Orthodox synagogue, opened in 1932, is in Salisbury Road and can accommodate a congregation of 2000. A Liberal Jewish congregation also meets in the city.", "title": "Demography" }, { "paragraph_id": 61, "text": "A Sikh gurdwara and a Hindu mandir are located in Leith. The city also has a Brahma Kumaris centre in the Polwarth area.", "title": "Demography" }, { "paragraph_id": 62, "text": "The Edinburgh Buddhist Centre, run by the Triratna Buddhist Community, formerly situated in Melville Terrace, now runs sessions at the Healthy Life Centre, Bread Street. Other Buddhist traditions are represented by groups which meet in the capital: the Community of Interbeing (followers of Thich Nhat Hanh), Rigpa, Samye Dzong, Theravadin, Pure Land and Shambala. There is a Sōtō Zen Priory in Portobello and a Theravadin Thai Buddhist Monastery in Slateford Road.", "title": "Demography" }, { "paragraph_id": 63, "text": "Edinburgh is home to a Baháʼí community, and a Theosophical Society meets in Great King Street.", "title": "Demography" }, { "paragraph_id": 64, "text": "Edinburgh has an Inter-Faith Association.", "title": "Demography" }, { "paragraph_id": 65, "text": "Edinburgh has over 39 graveyards and cemeteries, many of which are listed and of historical character, including several former church burial grounds. 
Examples include Old Calton Burial Ground, Greyfriars Kirkyard and Dean Cemetery.", "title": "Demography" }, { "paragraph_id": 66, "text": "Edinburgh has the strongest economy of any city in the United Kingdom outside London and the highest percentage of professionals in the UK, with 43% of the population holding a degree-level or professional qualification. According to the Centre for International Competitiveness, it is the most competitive large city in the United Kingdom. It also has the highest gross value added per employee of any city in the UK outside London, measuring £57,594 in 2010. It was named European Best Large City of the Future for Foreign Direct Investment and Best Large City for Foreign Direct Investment Strategy in the Financial Times fDi magazine awards 2012/13.", "title": "Economy" }, { "paragraph_id": 67, "text": "In the 19th century, Edinburgh's economy was known for banking and insurance, publishing and printing, and brewing and distilling. Today, its economy is based mainly on financial services, scientific research, higher education, and tourism. In March 2010, unemployment in Edinburgh was comparatively low at 3.6%, and it remains consistently below the Scottish average of 4.5%. Edinburgh is the second most visited city in the UK by foreign visitors, after London.", "title": "Economy" }, { "paragraph_id": 68, "text": "Banking has been a mainstay of the Edinburgh economy for over 300 years, since the Bank of Scotland was established by an act of the Scottish Parliament in 1695. Today, the financial services industry, with its particularly strong insurance and investment sectors, and underpinned by Edinburgh-based firms such as Scottish Widows and Standard Life Aberdeen, accounts for the city being the UK's second financial centre after London and Europe's fourth in terms of equity assets. The NatWest Group (formerly Royal Bank of Scotland Group) opened new global headquarters at Gogarburn in the west of the city in October 2005. The city is home to the headquarters of Bank of Scotland, Sainsbury's Bank, Tesco Bank, and TSB Bank.", "title": "Economy" }, { "paragraph_id": 69, "text": "Tourism is also an important element in the city's economy. As a World Heritage Site, the city draws tourists to historical sites such as Edinburgh Castle, the Palace of Holyroodhouse and the Old and New Towns. Their numbers are augmented in August each year during the Edinburgh Festivals, which attract 4.4 million visitors and generate over £100M for the local economy.", "title": "Economy" }, { "paragraph_id": 70, "text": "As the centre of Scotland's government and legal system, the public sector plays a central role in Edinburgh's economy. Many departments of the Scottish Government are in the city. Other major employers include NHS Scotland and local government administration. When the £1.3bn Edinburgh & South East Scotland City Region Deal was signed in 2018, the region's Gross Value Added (GVA) contribution to the Scottish economy was cited as £33bn, or 33% of the country's output. The City Region Deal funds a range of \"Data Driven Innovation\" hubs which are using data to innovate in the region, recognising the region's strengths in technology and data science, the growing importance of the data economy, and the need to tackle the digital skills gap, as a route to social and economic prosperity.", "title": "Economy" }, { "paragraph_id": 71, "text": "The city hosts a series of festivals that run between the end of July and early September each year.
The best known of these events are the Edinburgh Festival Fringe, the Edinburgh International Festival, the Edinburgh Military Tattoo, the Edinburgh Art Festival and the Edinburgh International Book Festival.", "title": "Culture" }, { "paragraph_id": 72, "text": "The longest established of these festivals is the Edinburgh International Festival, which was first held in 1947 and consists mainly of a programme of high-profile theatre productions and classical music performances, featuring international directors, conductors, theatre companies and orchestras.", "title": "Culture" }, { "paragraph_id": 73, "text": "This has since been overtaken in size by the Edinburgh Fringe, which began as a programme of marginal acts alongside the \"official\" Festival and has become the world's largest performing arts festival. In 2017, nearly 3,400 different shows were staged in 300 venues across the city. Comedy has become one of the mainstays of the Fringe, with numerous well-known comedians getting their first 'break' there, often by being chosen to receive the Edinburgh Comedy Award. The Edinburgh Military Tattoo occupies the Castle Esplanade every night for three weeks each August, with massed pipe bands and military bands drawn from around the world. Performances end with a short fireworks display.", "title": "Culture" }, { "paragraph_id": 74, "text": "As well as the summer festivals, many other festivals are held during the rest of the year, including the Edinburgh International Film Festival and Edinburgh International Science Festival.", "title": "Culture" }, { "paragraph_id": 75, "text": "The summer of 2020 was the first time in its 70-year history that the Edinburgh festival was not run, being cancelled due to the COVID-19 pandemic. This affected many of the tourist-focused businesses in Edinburgh which depend on the various festivals over summer to return an annual profit.", "title": "Culture" }, { "paragraph_id": 76, "text": "The annual Edinburgh Hogmanay celebration was originally an informal street party focused on the Tron Kirk in the Old Town's High Street. Since 1993, it has been officially organised with the focus moved to Princes Street. In 1996, over 300,000 people attended, leading to ticketing of the main street party in later years up to a limit of 100,000 tickets. Hogmanay now covers four days of processions, concerts and fireworks, with the street party beginning on Hogmanay. Alternative tickets are available for entrance into the Princes Street Gardens concert and Cèilidh, where well-known artists perform and ticket holders can participate in traditional Scottish cèilidh dancing. The event attracts thousands of people from all over the world.", "title": "Culture" }, { "paragraph_id": 77, "text": "On the night of 30 April the Beltane Fire Festival takes place on Calton Hill, involving a procession followed by scenes inspired by old pagan spring fertility celebrations. At the beginning of October each year the Dussehra Hindu Festival is also held on Calton Hill.", "title": "Culture" }, { "paragraph_id": 78, "text": "Outside the Festival season, Edinburgh supports several theatres and production companies. The Royal Lyceum Theatre has its own company, while the King's Theatre, Edinburgh Festival Theatre and Edinburgh Playhouse stage large touring shows. The Traverse Theatre presents a more contemporary repertoire.
Productions by amateur theatre companies are staged at the Bedlam Theatre, Church Hill Theatre and King's Theatre, among others.", "title": "Culture" }, { "paragraph_id": 79, "text": "The Usher Hall is Edinburgh's premier venue for classical music, as well as occasional popular music concerts. It was the venue for the Eurovision Song Contest 1972. Other halls staging music and theatre include The Hub, the Assembly Rooms and the Queen's Hall. The Scottish Chamber Orchestra is based in Edinburgh.", "title": "Culture" }, { "paragraph_id": 80, "text": "Edinburgh has one repertory cinema, The Cameo, and formerly the Edinburgh Filmhouse, as well as the independent Dominion Cinema and a range of multiplexes.", "title": "Culture" }, { "paragraph_id": 81, "text": "Edinburgh has a healthy popular music scene. Occasionally large concerts are staged at Murrayfield and Meadowbank, while mid-sized events take place at smaller venues such as 'The Corn Exchange', 'The Liquid Rooms' and 'The Bongo Club'. In 2010, PRS for Music listed Edinburgh among the UK's top ten 'most musical' cities. Several city pubs are well known for their live performances of folk music. They include 'Sandy Bell's' in Forrest Road, 'Captain's Bar' in South College Street and 'Whistlebinkies' in South Bridge.", "title": "Culture" }, { "paragraph_id": 82, "text": "As in many other cities in the UK, numerous nightclub venues host electronic dance music events.", "title": "Culture" }, { "paragraph_id": 83, "text": "Edinburgh is home to a flourishing group of contemporary composers such as Nigel Osborne, Peter Nelson, Lyell Cresswell, Hafliði Hallgrímsson, Edward Harper, Robert Crawford, Robert Dow and John McLeod. McLeod's music is heard regularly on BBC Radio 3 and throughout the UK.", "title": "Culture" }, { "paragraph_id": 84, "text": "The main local newspaper is the Edinburgh Evening News. It is owned and published alongside its sister titles The Scotsman and Scotland on Sunday by JPIMedia.", "title": "Culture" }, { "paragraph_id": 85, "text": "The city has two commercial radio stations: Forth 1, a station which broadcasts mainstream chart music, and Forth 2 on medium wave, which plays classic hits. Capital Scotland, Heart Scotland and Eklipse Sports Radio also have transmitters covering Edinburgh. Along with the UK national radio stations, BBC Radio Scotland and the Gaelic language service BBC Radio nan Gàidheal are also broadcast. DAB digital radio is broadcast over two local multiplexes. BFBS Radio broadcasts from studios on the base at Dreghorn Barracks across the city on 98.5FM as part of its UK Bases network. Small-scale DAB started in October 2022 with numerous community stations on board.", "title": "Culture" }, { "paragraph_id": 86, "text": "Television, along with most radio services, is broadcast to the city from the Craigkelly transmitting station situated in Fife on the opposite side of the Firth of Forth and the Black Hill transmitting station in North Lanarkshire to the west.", "title": "Culture" }, { "paragraph_id": 87, "text": "There are no television stations based in the city. Edinburgh Television existed from the late 1990s to early 2003 and STV Edinburgh existed from 2015 to 2018.", "title": "Culture" }, { "paragraph_id": 88, "text": "Edinburgh has many museums and libraries. These include the National Museum of Scotland, the National Library of Scotland, National War Museum, the Museum of Edinburgh, Surgeons' Hall Museum, the Writers' Museum, the Museum of Childhood and Dynamic Earth.
The Museum on The Mound has exhibits on money and banking.", "title": "Culture" }, { "paragraph_id": 89, "text": "Edinburgh Zoo, covering 82 acres (33 ha) on Corstorphine Hill, is the second most visited paid tourist attraction in Scotland, and home to two giant pandas, Tian Tian and Yang Guang, on loan from the People's Republic of China.", "title": "Culture" }, { "paragraph_id": 90, "text": "Edinburgh is also home to The Royal Yacht Britannia, decommissioned in 1997 and now a five-star visitor attraction and evening events venue permanently berthed at Ocean Terminal.", "title": "Culture" }, { "paragraph_id": 91, "text": "Edinburgh contains Scotland's three National Galleries of Art as well as numerous smaller art galleries. The national collection is housed in the Scottish National Gallery, located on The Mound, comprising the linked National Gallery of Scotland building and the Royal Scottish Academy building. Contemporary collections are shown in the Scottish National Gallery of Modern Art which occupies a split site at Belford. The Scottish National Portrait Gallery on Queen Street focuses on portraits and photography.", "title": "Culture" }, { "paragraph_id": 92, "text": "The council-owned City Art Centre in Market Street mounts regular art exhibitions. Across the road, The Fruitmarket Gallery offers world-class exhibitions of contemporary art, featuring work by British and international artists with both emerging and established international reputations.", "title": "Culture" }, { "paragraph_id": 93, "text": "The city hosts several of Scotland's galleries and organisations dedicated to contemporary visual art. Significant strands of this infrastructure include Creative Scotland, Edinburgh College of Art, Talbot Rice Gallery (University of Edinburgh), Collective Gallery (based at the City Observatory) and the Edinburgh Annuale.", "title": "Culture" }, { "paragraph_id": 94, "text": "There are also many small private shops/galleries that provide space to showcase works from local artists.", "title": "Culture" }, { "paragraph_id": 95, "text": "The locale around Princes Street is the main shopping area in the city centre, with souvenir shops, chain stores such as Boots the Chemist, Edinburgh Woollen Mill, and H&M. George Street, north of Princes Street, has several upmarket shops and independent stores. At the east end of Princes Street, the redeveloped St James Quarter opened its doors in June 2021, while next to the Balmoral Hotel and Waverley Station is Waverley Market. Multrees Walk is a pedestrian shopping district, dominated by the presence of Harvey Nichols, and other names including Louis Vuitton, Mulberry and Michael Kors.", "title": "Culture" }, { "paragraph_id": 96, "text": "Edinburgh also has substantial retail parks outside the city centre. These include The Gyle Shopping Centre and Hermiston Gait in the west of the city, Cameron Toll Shopping Centre, Straiton Retail Park (actually just outside the city, in Midlothian) and Fort Kinnaird in the south and east, and Ocean Terminal in the north on the Leith waterfront.", "title": "Culture" }, { "paragraph_id": 97, "text": "Following local government reorganisation in 1996, the City of Edinburgh Council constitutes one of the 32 council areas of Scotland. Like all other local authorities of Scotland, the council has powers over most matters of local administration such as housing, planning, local transport, parks, economic development and regeneration. 
The council comprises 63 elected councillors, returned from 17 multi-member electoral wards in the city.", "title": "Governance" }, { "paragraph_id": 98, "text": "Following the 2007 City of Edinburgh Council election, the incumbent Labour Party lost majority control of the council after 23 years to a Liberal Democrat/SNP coalition.", "title": "Governance" }, { "paragraph_id": 99, "text": "After the 2017 election, the SNP and Labour formed a coalition administration, which lasted until the next election in 2022.", "title": "Governance" }, { "paragraph_id": 100, "text": "The 2022 City of Edinburgh Council election resulted in the most politically balanced council in the UK, with 19 SNP, 13 Labour, 12 Liberal Democrat, 10 Green and 9 Conservative councillors. A minority Labour administration was formed, being voted in by Scottish Conservative and Scottish Liberal Democrat councillors. The SNP and Greens presented a coalition agreement, but could not command majority support in the Council. The decision to form an administration supported by Conservatives caused controversy within the Scottish Labour Party group and led to the suspension of two Labour councillors on the Council for abstaining on the vote to approve the new administration.", "title": "Governance" }, { "paragraph_id": 101, "text": "The city's coat of arms was registered by the Lord Lyon King of Arms in 1732.", "title": "Governance" }, { "paragraph_id": 102, "text": "Edinburgh, like all of Scotland, is represented in the Scottish Parliament, situated in the Holyrood area of the city. For electoral purposes, the city is divided into six constituencies which, along with three seats outside the city, form part of the Lothian region. Each constituency elects one Member of the Scottish Parliament (MSP) by the first past the post system of election, and the region elects seven additional MSPs to produce a result based on a form of proportional representation.", "title": "Governance" }, { "paragraph_id": 103, "text": "As of the 2021 election, the Scottish National Party have four MSPs: Ash Denham for Edinburgh Eastern, Ben Macpherson for Edinburgh Northern and Leith, Gordon MacDonald for Edinburgh Pentlands and Angus Robertson for Edinburgh Central. Alex Cole-Hamilton, the Leader of the Scottish Liberal Democrats, represents Edinburgh Western, and Daniel Johnson of the Scottish Labour Party represents Edinburgh Southern.", "title": "Governance" }, { "paragraph_id": 104, "text": "In addition, the city is represented by seven regional MSPs representing the Lothian electoral region: the Conservatives have three regional MSPs (Jeremy Balfour, Miles Briggs and Sue Webber), Labour have two (Sarah Boyack and Foysol Choudhury), and two Scottish Green regional MSPs were elected (the Greens' co-leader Lorna Slater and Alison Johnstone). However, following her election as the Presiding Officer of the 6th Session of the Scottish Parliament on 13 May 2021, Alison Johnstone has abided by the established parliamentary convention for speakers and renounced all affiliation with her former political party for the duration of her term as Presiding Officer. She presently sits as an independent MSP for the Lothians Region.", "title": "Governance" }, { "paragraph_id": 105, "text": "Edinburgh is also represented in the House of Commons of the United Kingdom by five Members of Parliament.
The city is divided into Edinburgh North and Leith, Edinburgh East, Edinburgh South, Edinburgh South West, and Edinburgh West, each constituency electing one member by the first past the post system.", "title": "Governance" }, { "paragraph_id": 106, "text": "Since the 2019 UK General election, Edinburgh is represented by three Scottish National Party MPs (Deirdre Brock, Edinburgh North and Leith/Tommy Sheppard, Edinburgh East/Joanna Cherry, Edinburgh South West), one Liberal Democrat MP in Edinburgh West (Christine Jardine) and one Labour MP in Edinburgh South (Ian Murray).", "title": "Governance" }, { "paragraph_id": 107, "text": "Edinburgh Airport is Scotland's busiest airport and the principal international gateway to the capital, handling over 14.7 million passengers; it was also the sixth-busiest airport in the United Kingdom by total passengers in 2019. In anticipation of rising passenger numbers, the former operator of the airport BAA outlined a draft masterplan in 2011 to provide for the expansion of the airfield and the terminal building. In June 2012, Global Infrastructure Partners purchased the airport for £807 million. The possibility of building a second runway to cope with an increased number of aircraft movements has also been mooted.", "title": "Transport" }, { "paragraph_id": 108, "text": "Travel in Edinburgh is undertaken predominantly by bus. Lothian Buses, the successor company to Edinburgh Corporation Transport Department, operate the majority of city bus services within the city and to surrounding suburbs, with the most routes running via Princes Street. Services further afield operate from the Edinburgh Bus Station off St Andrew Square and Waterloo Place and are operated mainly by Stagecoach East Scotland, Scottish Citylink, National Express Coaches and Borders Buses.", "title": "Transport" }, { "paragraph_id": 109, "text": "Lothian Buses and McGill's Scotland East operate the city's branded public tour buses. The night bus service and airport buses are mainly operated by Lothian Buses link. In 2019, Lothian Buses recorded 124.2 million passenger journeys.", "title": "Transport" }, { "paragraph_id": 110, "text": "To tackle traffic congestion, Edinburgh is now served by six park & ride sites on the periphery of the city at Sheriffhall (in Midlothian), Ingliston, Riccarton, Inverkeithing (in Fife), Newcraighall and Straiton (in Midlothian). A referendum of Edinburgh residents in February 2005 rejected a proposal to introduce congestion charging in the city.", "title": "Transport" }, { "paragraph_id": 111, "text": "Edinburgh Waverley is the second-busiest railway station in Scotland, with only Glasgow Central handling more passengers. On the evidence of passenger entries and exits between April 2015 and March 2016, Edinburgh Waverley is the fifth-busiest station outside London; it is also the UK's second biggest station in terms of the number of platforms and area size. Waverley is the terminus for most trains arriving from London King's Cross and the departure point for many rail services within Scotland operated by ScotRail.", "title": "Transport" }, { "paragraph_id": 112, "text": "To the west of the city centre lies Haymarket station, which is an important commuter stop. Opened in 2003, Edinburgh Park station serves the Gyle business park in the west of the city and the nearby Gogarburn headquarters of the Royal Bank of Scotland. 
The Edinburgh Crossrail route connects Edinburgh Park with Haymarket, Edinburgh Waverley and the suburban stations of Brunstane and Newcraighall in the east of the city. There are also commuter lines to Edinburgh Gateway, South Gyle and Dalmeny, the latter serving South Queensferry by the Forth Bridges, and to Wester Hailes and Curriehill in the south-west of the city.", "title": "Transport" }, { "paragraph_id": 113, "text": "Edinburgh Trams became operational on 31 May 2014. The city had been without a tram system since Edinburgh Corporation Tramways ceased on 16 November 1956. Following parliamentary approval in 2007, construction began in early 2008. The first stage of the project was expected to be completed by July 2011 but, following delays caused by extra utility work and a long-running contractual dispute between the council and the main contractor, Bilfinger SE, the project was rescheduled. The line opened in 2014 but had been cut short to 8.7 mi (14.0 km) in length, running from Edinburgh Airport to York Place in the east end of the city.", "title": "Transport" }, { "paragraph_id": 114, "text": "The line was later extended north to Leith and Newhaven, opening a further eight stops to passengers in June 2023. The York Place stop was replaced by a new island stop at Picardy Place.", "title": "Transport" }, { "paragraph_id": 115, "text": "The original plan would have seen a second line run from Haymarket through Ravelston and Craigleith to Granton Square on the Waterfront Edinburgh. This was shelved in 2011 but is now once again under consideration, as is another line potentially linking the south of the city and the Bioquarter.", "title": "Transport" }, { "paragraph_id": 116, "text": "There were also long-term plans for lines running west from the airport to Ratho and Newbridge and another connecting Granton to Newhaven via Lower Granton Road.", "title": "Transport" }, { "paragraph_id": 117, "text": "Lothian Buses and Edinburgh Trams are both owned and operated by Transport for Edinburgh.", "title": "Transport" }, { "paragraph_id": 118, "text": "Despite its modern transport links, in January 2021 Edinburgh was named the most congested city in the UK for the fourth year running, though it had fallen to seventh place by 2022.", "title": "Transport" }, { "paragraph_id": 119, "text": "There are three universities in Edinburgh: the University of Edinburgh, Heriot-Watt University and Edinburgh Napier University.", "title": "Education" }, { "paragraph_id": 120, "text": "Established by royal charter in 1583, the University of Edinburgh is one of Scotland's ancient universities and is the fourth oldest in the country after St Andrews, Glasgow and Aberdeen. Originally centred on Old College, the university expanded to premises on The Mound, the Royal Mile and George Square. Today, the King's Buildings in the south of the city contain most of the schools within the College of Science and Engineering. In 2002, the medical school moved to purpose-built accommodation adjacent to the new Royal Infirmary of Edinburgh at Little France. The university is placed 16th in the QS World University Rankings for 2022.
It takes the name Heriot-Watt from Scottish inventor James Watt and Scottish philanthropist and goldsmith George Heriot. Heriot-Watt University has been named International University of the Year by The Times and Sunday Times Good University Guide 2018. In the latest Research Excellence Framework, it was ranked overall in the Top 25% of UK universities and 1st in Scotland for research impact.", "title": "Education" }, { "paragraph_id": 122, "text": "Edinburgh Napier University was originally founded as the Napier College, which was renamed Napier Polytechnic in 1986 and gained university status in 1992. Edinburgh Napier University has campuses in the south and west of the city, including the former Merchiston Tower and Craiglockhart Hydropathic. It is home to the Screen Academy Scotland.", "title": "Education" }, { "paragraph_id": 123, "text": "Queen Margaret University was located in Edinburgh before it moved to a new campus just outside the city boundary on the edge of Musselburgh in 2008.", "title": "Education" }, { "paragraph_id": 124, "text": "Until 2012, further education colleges in the city included Jewel and Esk College (incorporating Leith Nautical College founded in 1903), Telford College, opened in 1968, and Stevenson College, opened in 1970. These have now been amalgamated to form Edinburgh College. Scotland's Rural College also has a campus in south Edinburgh. Other institutions include the Royal College of Surgeons of Edinburgh and the Royal College of Physicians of Edinburgh which were established by royal charter in 1506 and 1681 respectively. The Trustees Drawing Academy of Edinburgh, founded in 1760, became the Edinburgh College of Art in 1907.", "title": "Education" }, { "paragraph_id": 125, "text": "There are 18 nursery, 94 primary and 23 secondary schools administered by the City of Edinburgh Council. Edinburgh is home to The Royal High School, one of the oldest schools in the country and the world. The city also has several independent, fee-paying schools including Edinburgh Academy, Fettes College, George Heriot's School, George Watson's College, Merchiston Castle School, Stewart's Melville College and The Mary Erskine School. In 2009, the proportion of pupils attending independent schools was 24.2%, far above the Scottish national average of just over 7% and higher than in any other region of Scotland. In August 2013, the City of Edinburgh Council opened the city's first stand-alone Gaelic primary school, Bun-sgoil Taobh na Pàirce.", "title": "Education" }, { "paragraph_id": 126, "text": "The main NHS Lothian hospitals serving the Edinburgh area are the Royal Infirmary of Edinburgh, which includes the University of Edinburgh Medical School, and the Western General Hospital, which has a large cancer treatment centre and nurse-led Minor Injuries Clinic. The Royal Edinburgh Hospital in Morningside specialises in mental health. 
The Royal Hospital for Children and Young People, colloquially referred to as the Sick Kids, is a specialist paediatrics hospital.", "title": "Healthcare" }, { "paragraph_id": 127, "text": "There are two private hospitals: Murrayfield Hospital in the west of the city and Shawfair Hospital in the south; both are owned by Spire Healthcare.", "title": "Healthcare" }, { "paragraph_id": 128, "text": "Edinburgh has three football clubs that play in the Scottish Professional Football League (SPFL): Heart of Midlothian, founded in 1874, Hibernian, founded in 1875 and Edinburgh City F.C., founded in 1966.", "title": "Sport" }, { "paragraph_id": 129, "text": "Heart of Midlothian and Hibernian are known locally as \"Hearts\" and \"Hibs\", respectively. Both play in the Scottish Premiership. They are the oldest city rivals in Scotland and the Edinburgh derby is one of the oldest derby matches in world football. Both clubs have won the Scottish league championship four times. Hearts have won the Scottish Cup eight times and the Scottish League Cup four times. Hibs have won the Scottish Cup and the Scottish League Cup three times each. Edinburgh City were promoted to Scottish League Two in the 2015–16 season, becoming the first club to win promotion to the SPFL via the pyramid system playoffs.", "title": "Sport" }, { "paragraph_id": 130, "text": "Edinburgh was also home to four other former Scottish Football League clubs: the original Edinburgh City (founded in 1928), Leith Athletic, Meadowbank Thistle and St Bernard's. Meadowbank Thistle played at Meadowbank Stadium until 1995, when the club moved to Livingston and became Livingston F.C. The Scottish national team has very occasionally played at Easter Road and Tynecastle, although its normal home stadium is Hampden Park in Glasgow. St Bernard's' New Logie Green was used to host the 1896 Scottish Cup Final, the only time the match has been played outside Glasgow.", "title": "Sport" }, { "paragraph_id": 131, "text": "The city also plays host to Lowland Football League clubs Civil Service Strollers, Edinburgh University and Spartans, as well as East of Scotland League clubs Craigroyston, Edinburgh United, Heriot-Watt University, Leith Athletic, Lothian Thistle Hutchison Vale, and Tynecastle.", "title": "Sport" }, { "paragraph_id": 132, "text": "In women's football, Hearts, Hibs and Spartans play in the SWPL 1. Hutchison Vale and Boroughmuir Thistle play in the SWPL 2.", "title": "Sport" }, { "paragraph_id": 133, "text": "The Scotland national rugby union team play at Murrayfield Stadium, and the professional Edinburgh Rugby team play at the nextdoor Edinburgh Rugby Stadium; both are owned by the Scottish Rugby Union and are also used for other events, including music concerts. Murrayfield is the largest capacity stadium in Scotland, seating 67,144 spectators. Edinburgh is also home to Scottish Premiership teams Boroughmuir RFC, Currie RFC, the Edinburgh Academicals, Heriot's Rugby Club and Watsonians RFC.", "title": "Sport" }, { "paragraph_id": 134, "text": "The Edinburgh Academicals ground at Raeburn Place was the location of the world's first international rugby game on 27 March 1871, between Scotland and England.", "title": "Sport" }, { "paragraph_id": 135, "text": "Rugby league is represented by the Edinburgh Eagles who play in the Rugby League Conference Scotland Division. 
Murrayfield Stadium has hosted the Magic Weekend where all Super League matches are played in the stadium over one weekend.", "title": "Sport" }, { "paragraph_id": 136, "text": "The Scottish cricket team, which represents Scotland internationally, play their home matches at the Grange cricket club.", "title": "Sport" }, { "paragraph_id": 137, "text": "The Edinburgh Capitals are the latest of a succession of ice hockey clubs in the Scottish capital. Previously Edinburgh was represented by the Murrayfield Racers (2018), the original Murrayfield Racers (who folded in 1996) and the Edinburgh Racers. The club play their home games at the Murrayfield Ice Rink and have competed in the eleven-team professional Scottish National League (SNL) since the 2018–19 season.", "title": "Sport" }, { "paragraph_id": 138, "text": "Next door to Murrayfield Ice Rink is a 7-sheeter dedicated curling facility where curling is played from October to March each season.", "title": "Sport" }, { "paragraph_id": 139, "text": "Caledonia Pride are the only women's professional basketball team in Scotland. Established in 2016, the team compete in the UK wide Women's British Basketball League and play their home matches at the Oriam National Performance Centre. Edinburgh also has several men's basketball teams within the Scottish National League. Boroughmuir Blaze, City of Edinburgh Kings and Edinburgh Lions all compete in Division 1 of the National League, and Pleasance B.C. compete in Division 2.", "title": "Sport" }, { "paragraph_id": 140, "text": "The Edinburgh Diamond Devils is a baseball club which won its first Scottish Championship in 1991 as the \"Reivers.\" 1992 saw the team repeat the achievement, becoming the first team to do so in league history. The same year saw the start of their first youth team, the Blue Jays. The club adopted its present name in 1999.", "title": "Sport" }, { "paragraph_id": 141, "text": "Edinburgh has also hosted national and international sports events including the World Student Games, the 1970 British Commonwealth Games, the 1986 Commonwealth Games and the inaugural 2000 Commonwealth Youth Games. For the 1970 Games the city built Olympic standard venues and facilities including Meadowbank Stadium and the Royal Commonwealth Pool. The Pool underwent refurbishment in 2012 and hosted the Diving competition in the 2014 Commonwealth Games which were held in Glasgow.", "title": "Sport" }, { "paragraph_id": 142, "text": "In American football, the Scottish Claymores played WLAF/NFL Europe games at Murrayfield, including their World Bowl 96 victory. From 1995 to 1997 they played all their games there, from 1998 to 2000 they split their home matches between Murrayfield and Glasgow's Hampden Park, then moved to Glasgow full-time, with one final Murrayfield appearance in 2002. The city's most successful non-professional team are the Edinburgh Wolves who play at Meadowbank Stadium.", "title": "Sport" }, { "paragraph_id": 143, "text": "The Edinburgh Marathon has been held annually in the city since 2003 with more than 16,000 runners taking part on each occasion. Its organisers have called it \"the fastest marathon in the UK\" due to the elevation drop of 40 m (130 ft). 
The city also organises a half-marathon, as well as 10 km (6.2 mi) and 5 km (3.1 mi) races, including a 5 km (3 mi) race on 1 January each year.", "title": "Sport" }, { "paragraph_id": 144, "text": "Edinburgh has a speedway team, the Edinburgh Monarchs, which, since the loss of its stadium in the city, has raced at the Lothian Arena in Armadale, West Lothian. The Monarchs have won the Premier League championship five times in their history, in 2003 and again in 2008, 2010, 2014 and 2015.", "title": "Sport" }, { "paragraph_id": 145, "text": "For basketball, the city has a basketball club, Edinburgh Tigers.", "title": "Sport" }, { "paragraph_id": 146, "text": "Edinburgh has a long literary tradition, which became especially evident during the Scottish Enlightenment. This heritage and the city's lively literary life in the present led to it being declared the first UNESCO City of Literature in 2004. Prominent authors who have lived in Edinburgh include the economist Adam Smith, born in Kirkcaldy and author of The Wealth of Nations, James Boswell, biographer of Samuel Johnson; Sir Walter Scott, creator of the historical novel and author of works such as Rob Roy, Ivanhoe, and Heart of Midlothian; James Hogg, author of The Private Memoirs and Confessions of a Justified Sinner; Robert Louis Stevenson, creator of Treasure Island, Kidnapped, and Strange Case of Dr Jekyll and Mr Hyde; Sir Arthur Conan Doyle, the creator of Sherlock Holmes; Muriel Spark, author of The Prime of Miss Jean Brodie; diarist Janet Harden; Irvine Welsh, author of Trainspotting, whose novels are mostly set in the city and often written in colloquial Scots; Ian Rankin, author of the Inspector Rebus series of crime thrillers, Alexander McCall Smith, author of the No. 1 Ladies' Detective Agency series, and J. K. Rowling, author of Harry Potter, who wrote much of her first book in Edinburgh coffee shops and now lives in the Cramond area of the city.", "title": "People" }, { "paragraph_id": 147, "text": "Scotland has a rich history of science and engineering, with Edinburgh producing a number of leading figures. John Napier, inventor of logarithms, was born in Merchiston Tower and lived and died in the city. His house now forms part of the original campus of Napier University which was named in his honour. He lies buried under St. Cuthbert's Church. James Clerk Maxwell, founder of the modern theory of electromagnetism, was born at 14 India Street (now the home of the James Clerk Maxwell Foundation) and educated at the Edinburgh Academy and the University of Edinburgh, as was the engineer and telephone pioneer Alexander Graham Bell. 
James Braidwood, who organised Britain's first municipal fire brigade, was also born in the city and began his career there.", "title": "People" }, { "paragraph_id": 148, "text": "Other names connected with the city include physicist Max Born, a principal founder of quantum mechanics and a Nobel laureate; Charles Darwin, the biologist who propounded the theory of natural selection; David Hume, philosopher, economist and historian; James Hutton, regarded as the \"Father of Geology\"; Joseph Black, the chemist who discovered magnesium and carbon dioxide, and one of the founders of thermodynamics; pioneering medical researchers Joseph Lister and James Young Simpson; chemist and discoverer of the element nitrogen Daniel Rutherford; Colin Maclaurin, mathematician and developer of the Maclaurin series; and Ian Wilmut, the geneticist involved in the cloning of Dolly the sheep just outside Edinburgh, at the Roslin Institute. The stuffed carcass of Dolly the sheep is now on display in the National Museum of Scotland. The latest in a long line of science celebrities associated with the city is theoretical physicist, Nobel laureate and professor emeritus at the University of Edinburgh Peter Higgs, born in Newcastle but resident in Edinburgh for most of his academic career, after whom the Higgs boson particle has been named.", "title": "People" }, { "paragraph_id": 149, "text": "Edinburgh has been the birthplace of actors like Alastair Sim and Sir Sean Connery, known for being the first cinematic James Bond, the comedian and actor Ronnie Corbett, best known as one of The Two Ronnies, and the impressionist Rory Bremner. Famous artists from the city include the portrait painters Sir Henry Raeburn, Sir David Wilkie and Allan Ramsay.", "title": "People" }, { "paragraph_id": 150, "text": "The city has produced or been home to some very successful musicians in recent decades, particularly Ian Anderson, front man of the band Jethro Tull, The Incredible String Band, the folk duo The Corries, Wattie Buchan, lead singer and founding member of punk band The Exploited, Shirley Manson, lead singer of the band Garbage, the Bay City Rollers, The Proclaimers, Boards of Canada and Idlewild.", "title": "People" }, { "paragraph_id": 151, "text": "Edinburgh is the birthplace of former British Prime Minister Tony Blair, who attended the city's Fettes College.", "title": "People" }, { "paragraph_id": 152, "text": "Notorious criminals from Edinburgh's past include Deacon Brodie, head of a trades guild and Edinburgh city councillor by day but a burglar by night, who is said to have been the inspiration for Robert Louis Stevenson's story, the Strange Case of Dr Jekyll and Mr Hyde, and murderers Burke and Hare, who delivered fresh corpses for dissection to the famous anatomist Robert Knox.", "title": "People" }, { "paragraph_id": 153, "text": "Another well-known Edinburgh resident was Greyfriars Bobby. The small Skye Terrier reputedly kept vigil over his dead master's grave in Greyfriars Kirkyard for 14 years in the 1860s and 1870s, giving rise to a story of canine devotion which plays a part in attracting visitors to the city.", "title": "People" }, { "paragraph_id": 154, "text": "The City of Edinburgh has entered into 14 international twinning arrangements since 1954.
Most of the arrangements are styled as Twin Cities but the agreement with Kraków is designated as a Partner City, and the agreement with Kyoto Prefecture is officially styled as a Friendship Link, reflecting its status as the only region to be twinned with Edinburgh.", "title": "International relations" }, { "paragraph_id": 155, "text": "For a list of consulates in Edinburgh, see List of diplomatic missions in Scotland.", "title": "International relations" } ]
Edinburgh is the capital city of Scotland and one of its 32 council areas. The city is located in south-east Scotland, and is bounded to the north by the Firth of Forth estuary and to the south by the Pentland Hills. Edinburgh had a population of 506,520 in mid-2020, making it the second-most populous city in Scotland and the seventh-most populous in the United Kingdom. Recognised as the capital of Scotland since at least the 15th century, Edinburgh is the seat of the Scottish Government, the Scottish Parliament, the highest courts in Scotland, and the Palace of Holyroodhouse, the official residence of the British monarch in Scotland. It is also the annual venue of the General Assembly of the Church of Scotland. The city has long been a centre of education, particularly in the fields of medicine, Scottish law, literature, philosophy, the sciences and engineering. The University of Edinburgh, founded in 1582 and now one of three in the city, is considered one of the best research institutions in the world. It is the second-largest financial centre in the United Kingdom, the fourth largest in Europe, and the thirteenth largest internationally. The city is a cultural centre, and is the home of institutions including the National Museum of Scotland, the National Library of Scotland and the Scottish National Gallery. The city is also known for the Edinburgh International Festival and the Fringe, the latter being the world's largest annual international arts festival. Historic sites in Edinburgh include Edinburgh Castle, the Palace of Holyroodhouse, the churches of St. Giles, Greyfriars and the Canongate, and the extensive Georgian New Town built in the 18th/19th centuries. Edinburgh's Old Town and New Town together are listed as a UNESCO World Heritage Site, which has been managed by Edinburgh World Heritage since 1999. The city's historical and cultural attractions have made it the UK's second-most visited tourist destination, attracting 4.9 million visits, including 2.4 million from overseas in 2018. Edinburgh is governed by the City of Edinburgh Council, a unitary authority. The City of Edinburgh council area had an estimated population of 526,470 in mid-2021, and includes outlying towns and villages which are not part of Edinburgh proper. The city is in the Lothian region and was historically part of the shire of Midlothian.
2001-07-27T00:40:43Z
2023-12-27T22:49:38Z
[ "Template:Scottish council populations", "Template:Citation", "Template:Cite map", "Template:Curlie", "Template:Historical populations", "Template:List of European capitals by region", "Template:IPAc-en", "Template:Flag", "Template:Cite web", "Template:Webarchive", "Template:Scottish settlement populations", "Template:Authority control", "Template:Citation needed", "Template:See also", "Template:Cite EB1911", "Template:IPA-sco", "Template:Lang-gd", "Template:Notelist-ua", "Template:Cite journal", "Template:ISBN", "Template:Wikisource1911Enc", "Template:Weather box", "Template:Clear right", "Template:Cn", "Template:Wikivoyage", "Template:Navboxes", "Template:Cvt", "Template:Convert", "Template:IPA-gd", "Template:Main", "Template:Commons category", "Template:Short description", "Template:Rp", "Template:Cite book", "Template:Cite legislation UK", "Template:Cite magazine", "Template:About", "Template:Flagicon", "Template:Reflist", "Template:Cite encyclopedia", "Template:Cite news", "Template:Canmore", "Template:Use British English", "Template:Use dmy dates", "Template:Wikisource", "Template:Portal", "Template:Clear left", "Template:Notelist", "Template:Wide image", "Template:Scottish locality populations", "Template:Circa", "Template:Infobox settlement", "Template:Lang", "Template:Efn-ua" ]
https://en.wikipedia.org/wiki/Edinburgh
9,603
Ernest Rutherford
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, PRS, HonFRSE (30 August 1871 – 19 October 1937) was a New Zealand physicist who was a pioneering researcher in both atomic and nuclear physics. Rutherford has been described as "the father of nuclear physics", and "the greatest experimentalist since Michael Faraday". In 1908, he was awarded the Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances." He was the first Oceanian Nobel laureate, and the first to perform the awarded work in Canada. Rutherford's discoveries include the concept of radioactive half-life, the radioactive element radon, and the differentiation and naming of alpha and beta radiation. Together with Thomas Royds, Rutherford is credited with proving that alpha radiation is composed of helium nuclei. In 1911, he theorized that atoms have their charge concentrated in a very small nucleus. This was done through his discovery and interpretation of Rutherford scattering during the gold foil experiment performed by Hans Geiger and Ernest Marsden, resulting in his conception of the Rutherford model of the atom. In 1917, he performed the first artificially-induced nuclear reaction by conducting experiments where nitrogen nuclei were bombarded with alpha particles. As a result, he discovered the emission of a subatomic particle which he initially called the "hydrogen atom", but later (more accurately) named the proton. He is also credited with developing the atomic numbering system alongside Henry Moseley. His other achievements include advancing the fields of radio communications and ultrasound technology. Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. Under his leadership, the neutron was discovered by James Chadwick in 1932. In the same year, the first controlled experiment to split the nucleus was performed by John Cockcroft and Ernest Walton, working under his direction. In honour of his scientific advancements, Rutherford was recognized as a Baron in the peerages of New Zealand and Britain. After his death in 1937, he was buried in Westminster Abbey near Charles Darwin and Isaac Newton. The chemical element rutherfordium (104Rf) was named after him in 1997. Ernest Rutherford was born on 30 August 1871 in Brightwater, a town near Nelson, New Zealand. He was the fourth of twelve children of James Rutherford, an immigrant farmer and mechanic from Perth, Scotland, and his wife Martha Thompson, a schoolteacher from Hornchurch, England. Rutherford's first name was mistakenly written as 'Earnest' on his birth certificate. He was known by his family as Ern. When Rutherford was five he moved to Foxhill and attended Foxhill School. At age 11 in 1883, his father moved the Rutherford family to Havelock, a town in the Marlborough Sounds. The move was made to be closer to a flax mill his father was operating near the Ruapaka Stream. Ernest studied at Havelock School. In 1887, on his second attempt, he won a scholarship to study at Nelson College. On his first examination attempt, he received 75 out of 130 marks for geography, 76 out of 130 for history, 101 out of 140 for English, and 200 out of 200 for arithmetic, totalling 452 out of 600 marks. With these marks, he had the highest total of anyone from Nelson. When he was awarded the scholarship, he had received 580 out of 600 possible marks. After being awarded the scholarship, Havelock School presented him with a five-volume set of books titled The Peoples of the World.
He studied at Nelson College between 1887 and 1889, and was head boy in 1889. He also played in the school's rugby team. He was offered a cadetship in government service, but he declined as he still had 15 months of college remaining. In 1889, after his second attempt, he won a scholarship to study at Canterbury College, University of New Zealand, between 1890 and 1894. He participated in its debating society and the Science Society. At Canterbury, he was awarded a BA in Latin, English, and Maths in 1892, an MA in Mathematics and Physical Science in 1893, and a BSc in Chemistry and Geology in 1894. Thereafter, he invented a new form of radio receiver, and in 1895 Rutherford was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851, to travel to England for postgraduate study at the Cavendish Laboratory, University of Cambridge. In 1897, he was awarded a BA Research Degree and the Coutts-Trotter Studentship from Trinity College, Cambridge. When Rutherford began his studies at Cambridge, he was among the first 'aliens' (those without a Cambridge degree) allowed to do research at the university, and was additionally honoured to study under J. J. Thomson. With Thomson's encouragement, Rutherford detected radio waves at 0.5 miles (800 m), and briefly held the world record for the distance over which electromagnetic waves could be detected, although when he presented his results at the British Association meeting in 1896, he discovered he had been outdone by Guglielmo Marconi, whose radio waves had sent a message across nearly 10 miles (16 km). Again under Thomson's leadership, Rutherford worked on the conductive effects of X-rays on gases, which led to the discovery of the electron, the results first presented by Thomson in 1897. Hearing of Henri Becquerel's experience with uranium, Rutherford started to explore its radioactivity, discovering two types that differed from X-rays in their penetrating power. Continuing his research in Canada, in 1899 he coined the terms "alpha ray" and "beta ray" to describe these two distinct types of radiation. In 1898, Rutherford was appointed Macdonald Professor of Physics at McGill University in Montreal, Canada, on Thomson's recommendation. From 1900 to 1903, he was joined at McGill by the young chemist Frederick Soddy (Nobel Prize in Chemistry, 1921) for whom he set the problem of identifying the noble gas emitted by the radioactive element thorium, a substance which was itself radioactive and would coat other substances. Once he had eliminated all the normal chemical reactions, Soddy suggested that it must be one of the inert gases, which they named thoron. This substance was later found to be Rn, an isotope of radon. They also found another substance they called Thorium X, later identified as Rn, and continued to find traces of helium. They also worked with samples of "Uranium X" (protactinium), from William Crookes, and radium, from Marie Curie. Rutherford further investigated thoron in conjunction with R.B. Owens and found that a sample of radioactive material of any size invariably took the same amount of time for half the sample to decay (in this case, 11½ minutes), a phenomenon for which he coined the term "half-life". Rutherford and Soddy published their paper "Law of Radioactive Change" to account for all their experiments.
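As an illustrative aside (a sketch added for clarity, not wording from the paper or from the article; the symbols N_0, N(t) and T are introduced only for this note), the half-life concept corresponds to the exponential decay law N(t) = N_0 (1/2)^(t/T) = N_0 e^(−(ln 2) t/T), where T is the half-life and N_0 the initial number of undecayed atoms. A half-life of 11½ minutes thus means that half of the sample remains after 11½ minutes, a quarter after 23 minutes and an eighth after 34½ minutes, whatever the initial size of the sample; this fixed, sample-independent timescale is exactly the regularity that Rutherford and Soddy's law describes.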
Until then, atoms were assumed to be the indestructible basis of all matter; and although Curie had suggested that radioactivity was an atomic phenomenon, the idea of the atoms of radioactive substances breaking up was a radically new idea. Rutherford and Soddy demonstrated that radioactivity involved the spontaneous disintegration of atoms into other, as yet, unidentified matter. In 1903, Rutherford considered a type of radiation, discovered (but not named) by French chemist Paul Villard in 1900, as an emission from radium, and realised that this observation must represent something different from his own alpha and beta rays, due to its very much greater penetrating power. Rutherford therefore gave this third type of radiation the name of gamma ray. All three of Rutherford's terms are in standard use today – other types of radioactive decay have since been discovered, but Rutherford's three types are among the most common. In 1904, Rutherford suggested that radioactivity provides a source of energy sufficient to explain the existence of the Sun for the many millions of years required for the slow biological evolution on Earth proposed by biologists such as Charles Darwin. The physicist Lord Kelvin had argued earlier for a much younger Earth, based on the insufficiency of known energy sources, but Rutherford pointed out, at a lecture attended by Kelvin, that radioactivity could solve this problem. Later that year, he was elected as a member to the American Philosophical Society, and in 1907 he returned to Britain to take the chair of physics at the Victoria University of Manchester. In Manchester, Rutherford continued his work with alpha radiation. In conjunction with Hans Geiger, he developed zinc sulfide scintillation screens and ionisation chambers to count alpha particles. By dividing the total charge they produced by the number counted, Rutherford decided that the charge on the alpha particle was two. In late 1907, Ernest Rutherford and Thomas Royds allowed alphas to penetrate a very thin window into an evacuated tube. As they sparked the tube into discharge, the spectrum obtained from it changed, as the alphas accumulated in the tube. Eventually, the clear spectrum of helium gas appeared, proving that alphas were at least ionised helium atoms, and probably helium nuclei. Ernest Rutherford was awarded the 1908 Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances". Rutherford continued to make ground-breaking discoveries long after receiving the Nobel prize in 1908. Along with Hans Geiger and Ernest Marsden in 1909, he carried out the Geiger–Marsden experiment, which demonstrated the nuclear nature of atoms by measuring the deflection of alpha particles passing through a thin gold foil. Rutherford was inspired to ask Geiger and Marsden in this experiment to look for alpha particles with very high deflection angles, which was not expected according to any theory of matter at that time. Such deflection angles, although rare, were found. It was Rutherford's interpretation of this data that led him to formulate the Rutherford model of the atom in 1911 – that a very small charged nucleus, containing much of the atom's mass, was orbited by low-mass electrons. In 1912, Rutherford was joined by Niels Bohr (who postulated that electrons moved in specific orbits). 
Bohr adapted Rutherford's nuclear structure to be consistent with Max Planck's quantum theory, and the resulting Rutherford–Bohr model is considered valid to this day. During World War I, Rutherford worked on a top-secret project to solve the practical problems of submarine detection. Both Rutherford and Paul Langevin suggested the use of piezoelectricity, and Rutherford successfully developed a device which measured its output. The use of piezoelectricity then became essential to the development of ultrasound as it is known today. The claim that Rutherford developed sonar, however, is a misconception, as subaquatic detection technologies utilize Langevin's transducer. Together with H.G. Moseley, Rutherford developed the atomic numbering system in 1913. Rutherford and Moseley's experiments used cathode rays to bombard various elements with streams of electrons and observed that each element responded in a consistent and distinct manner. Their research was the first to assert that each element could be defined by the properties of its inner structures – an observation that later led to the discovery of the atomic nucleus. This research led Rutherford to theorize that the hydrogen atom (at the time the least massive entity known to bear a positive charge) was a sort of "positive electron" – a component of every atomic element. It was not until 1919 that Rutherford expanded upon his theory of the "positive electron" with a series of experiments beginning shortly before the end of his time at Manchester. He found that nitrogen, and other light elements, ejected a proton, which he called a "hydrogen atom", when hit with α (alpha) particles. In particular, he showed that particles ejected by alpha particles colliding with hydrogen have unit charge and 1/4 the momentum of alpha particles. Rutherford returned to the Cavendish Laboratory in 1919, succeeding J. J. Thomson as the Cavendish professor and the laboratory's director, posts that he held until his death in 1937. During his tenure, Nobel prizes were awarded to James Chadwick for discovering the neutron (in 1932), John Cockcroft and Ernest Walton for an experiment that was to be known as splitting the atom using a particle accelerator, and Edward Appleton for demonstrating the existence of the ionosphere. In 1919–1920, Rutherford continued his research on the "hydrogen atom" to confirm that alpha particles break down nitrogen nuclei and to affirm the nature of the products. This result showed Rutherford that hydrogen nuclei were a part of nitrogen nuclei (and by inference, probably other nuclei as well). Such a construction had been suspected for many years, on the basis of atomic weights that were integral multiples of that of hydrogen; see Prout's hypothesis. Hydrogen was known to be the lightest element, and its nuclei presumably the lightest nuclei. Now, because of all these considerations, Rutherford decided that a hydrogen nucleus was possibly a fundamental building block of all nuclei, and also possibly a new fundamental particle as well, since nothing was known to be lighter than that nucleus. Thus, confirming and extending the work of Wilhelm Wien, who in 1898 discovered the proton in streams of ionized gas, in 1920 Rutherford postulated the hydrogen nucleus to be a new particle, which he dubbed the proton. 
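To illustrate the bookkeeping behind this reasoning (a standard-notation sketch added here; the explicit mass numbers are not spelled out in the surrounding text), the transmutation underlying these experiments, as it was later written down, balances both mass number and charge: ¹⁴N + ⁴He → ¹⁷O + ¹H, with mass numbers 14 + 4 = 17 + 1 and charges 7 + 2 = 8 + 1. The fact that nuclear masses are very nearly integer multiples of the hydrogen (proton) mass is the same regularity, noted above in connection with Prout's hypothesis, that led Rutherford to treat the hydrogen nucleus as a candidate building block of all nuclei.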
In 1921, while working with Niels Bohr, Rutherford theorized about the existence of neutrons (which he had christened in his 1920 Bakerian Lecture), which could somehow compensate for the repelling effect of the positive charges of protons by causing an attractive nuclear force and thus keep the nuclei from flying apart, due to the repulsion between protons. The only alternative to neutrons was the existence of "nuclear electrons", which would counteract some of the proton charges in the nucleus, since by then it was known that nuclei had about twice the mass that could be accounted for if they were simply assembled from hydrogen nuclei (protons). But how these nuclear electrons could be trapped in the nucleus was a mystery. Rutherford is widely quoted as saying, regarding the results of these experiments: "It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you." In 1932, Rutherford's theory of neutrons was proved by his associate James Chadwick, who recognized neutrons immediately when they were produced by other scientists and later himself, in bombarding beryllium with alpha particles. In 1935, Chadwick was awarded the Nobel Prize in Physics for this discovery. From as early as 1948 to at least 2017, there was a long-standing myth that Rutherford was the first scientist to observe and report an artificial transmutation of a stable element into another element: nitrogen into oxygen. It was thought by many people to be one of Rutherford's greatest accomplishments. The New Zealand government even issued a commemorative stamp in the belief that the nitrogen-to-oxygen discovery belonged to Rutherford. Beginning in 2017, many scientific institutions corrected their versions of this history to indicate that the credit for the discovery belongs to Patrick Blackett, who undertook this research at Rutherford's suggestion and with his help and advice. Rutherford did detect the ejected proton in 1919 and interpreted it as evidence for disintegration of the nitrogen nucleus (to lighter nuclei). In 1925, Blackett showed that the actual product is oxygen and identified the true reaction as ¹⁴N + α → ¹⁷O + p. Rutherford therefore recognized "that the nucleus may increase rather than diminish in mass as the result of collisions in which the proton is expelled". Rutherford received significant recognition in his home country of New Zealand. In 1901, he earned a DSc from the University of New Zealand. In 1916, he was awarded the Hector Memorial Medal. In 1925, Rutherford called for the New Zealand Government to support education and research, which led to the formation of the Department of Scientific and Industrial Research (DSIR) in the following year. In 1933, Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, which was established by the Royal Society of New Zealand as an award for outstanding scientific research. Additionally, Rutherford received a number of awards from the British Crown. He was knighted in 1914. He was appointed to the Order of Merit in the 1925 New Year Honours. Between 1925 and 1930, he served as President of the Royal Society, and later as president of the Academic Assistance Council, which helped almost 1,000 university refugees from Germany. In 1931 he was raised to the peerage as Baron Rutherford of Nelson, decorating his coat of arms with a kiwi and a Māori warrior.
The title became extinct upon his unexpected death in 1937. The young Rutherford made his grandmother a wooden potato masher, which was believed to have been made during the school holidays. It has been held in the collection of the Royal Society since 1888. In 1900, Rutherford married Mary Georgina Newton (1876–1954), to whom he had become engaged before leaving New Zealand, at St Paul's Anglican Church, Papanui in Christchurch. They had one daughter, Eileen Mary (1901–1930), who married the physicist Ralph Fowler. Rutherford's hobbies included golf and motoring. For some time before his death, Rutherford had a small hernia, which he neglected to have fixed, and it became strangulated, rendering him violently ill. Despite an emergency operation in London, he died four days afterwards, at Cambridge on 19 October 1937 at age 66, of what physicians termed "intestinal paralysis". After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton and other illustrious British scientists such as Charles Darwin. Rutherford is considered to be among the greatest scientists in history. At the opening session of the 1938 Indian Science Congress, which Rutherford had been expected to preside over before his death, astrophysicist James Jeans spoke in his place and deemed him "one of the greatest scientists of all time", saying: In his flair for the right line of approach to a problem, as well as in the simple directness of his methods of attack, [Rutherford] often reminds us of Faraday, but he had two great advantages which Faraday did not possess, first, exuberant bodily health and energy, and second, the opportunity and capacity to direct a band of enthusiastic co-workers. Great though Faraday's output of work was, it seems to me that to match Rutherford's work in quantity as well as in quality, we must go back to Newton. In some respects he was more fortunate than Newton. Rutherford was ever the happy warrior – happy in his work, happy in its outcome, and happy in its human contacts. Rutherford is known as "the father of nuclear physics" because his research, and work done under him as laboratory director, established the nuclear structure of the atom and the essential nature of radioactive decay as a nuclear process. Patrick Blackett, a research fellow working under Rutherford, using natural alpha particles, demonstrated induced nuclear transmutation. Later, Rutherford's team, using protons from an accelerator, demonstrated artificially-induced nuclear reactions and transmutation. Rutherford died too early to see Leó Szilárd's idea of controlled nuclear chain reactions come into being. However, a speech of Rutherford's about his artificially-induced transmutation in lithium, printed in the 12 September 1933 issue of The Times, was reported by Szilárd to have been his inspiration for thinking of the possibility of a controlled energy-producing nuclear chain reaction. Rutherford's speech touched on the 1932 work of his students John Cockcroft and Ernest Walton in "splitting" lithium into alpha particles by bombardment with protons from a particle accelerator they had constructed. 
Rutherford realized that the energy released from the split lithium atoms was enormous, but he also realized that the energy needed for the accelerator, and its essential inefficiency in splitting atoms in this fashion, made the project an impossibility as a practical source of energy (accelerator-induced fission of light elements remains too inefficient to be used in this way, even today). Rutherford's speech, in part, read: We might in these processes obtain very much more energy than the proton supplied, but on the average we could not expect to obtain energy in this way. It was a very poor and inefficient way of producing energy, and anyone who looked for a source of power in the transformation of the atoms was talking moonshine. But the subject was scientifically interesting because it gave insight into the atoms. The element rutherfordium, Rf, Z=104, was named in honour of Rutherford in 1997.
[ { "paragraph_id": 0, "text": "Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, PRS, HonFRSE (30 August 1871 – 19 October 1937) was a New Zealand physicist who was a pioneering researcher in both atomic and nuclear physics. Rutherford has been described as \"the father of nuclear physics\", and \"the greatest experimentalist since Michael Faraday\". In 1908, he was awarded the Nobel Prize in Chemistry \"for his investigations into the disintegration of the elements, and the chemistry of radioactive substances.\" He was the first Oceanian Nobel laureate, and the first to perform the awarded work in Canada.", "title": "" }, { "paragraph_id": 1, "text": "Rutherford's discoveries include the concept of radioactive half-life, the radioactive element radon, and the differentiation and naming of alpha and beta radiation. Together with Thomas Royds, Rutherford is credited with proving that alpha radiation is composed of helium nuclei. In 1911, he theorized that atoms have their charge concentrated in a very small nucleus. This was done through his discovery and interpretation of Rutherford scattering during the gold foil experiment performed by Hans Geiger and Ernest Marsden, resulting in his conception of the Rutherford model of the atom. In 1917, he performed the first artificially-induced nuclear reaction by conducting experiments where nitrogen nuclei were bombarded with alpha particles. As a result, he discovered the emission of a subatomic particle which he initially called the \"hydrogen atom\", but later (more accurately) named the proton. He is also credited with developing the atomic numbering system alongside Henry Moseley. His other achievements include advancing the fields of radio communications and ultrasound technology.", "title": "" }, { "paragraph_id": 2, "text": "Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. Under his leadership, the neutron was discovered by James Chadwick in 1932. In the same year, the first controlled experiment to split the nucleus was performed by John Cockcroft and Ernest Walton, working under his direction. In honour of his scientific advancements, Rutherford was recognized as a Baron in the peerages of New Zealand and Britain. After his death in 1937, he was buried in Westminster Abbey near Charles Darwin and Isaac Newton. The chemical element rutherfordium (104Rf) was named after him in 1997.", "title": "" }, { "paragraph_id": 3, "text": "Ernest Rutherford was born on 30 August 1871 in Brightwater, a town near Nelson, New Zealand. He was the fourth of twelve children of James Rutherford, an immigrant farmer and mechanic from Perth, Scotland, and his wife Martha Thompson, a schoolteacher from Hornchurch, England. Rutherford's birth certificate was mistakenly written as 'Earnest'. He was known by his family as Ern.", "title": "Early life and education" }, { "paragraph_id": 4, "text": "When Rutherford was five he moved to Foxhill and attended Foxhill School. At age 11 in 1883, his father moved the Rutherford family moved to Havelock, a town in the Marlborough Sounds. The move was made to be closer to the a flax mill the father was operating near the Ruapaka Stream. Ernest studied at Havelock School.", "title": "Early life and education" }, { "paragraph_id": 5, "text": "In 1887, on his second attempt, he won a scholarship to study at Nelson College. 
On his first examination attempt, he received 75 out of 130 marks for geography, 76 out of 130 for history, 101 out of 140 for English, and 200 out of 200 for arithmetic, totalling 452 out of 600 marks. With these marks, he had the highest of anyone from Nelson. When he was awarded the scholarship, he had received 580 out of 600 possible marks. After being awarded the scholarship, Havelock School presented him with a five-volume set of books titled The Peoples of the World. He studied at Nelson College between 1887 and 1889, and was head boy in 1889. He also played in the school's rugby team. He was offered a cadetship in government service, but he declined as he still had 15 months of college remaining.", "title": "Early life and education" }, { "paragraph_id": 6, "text": "In 1889, after his second attempt, he won a scholarship to study at Canterbury College, University of New Zealand, between 1890 and 1894. He participated in its debating society and the Science Society. At Canterbury, he was awarded a complex BA in Latin, English, and Maths in 1892, a MA in Mathematics and Physical Science in 1893, and a BSc in Chemistry and Geology in 1894.", "title": "Early life and education" }, { "paragraph_id": 7, "text": "Thereafter, he invented a new form of radio receiver, and in 1895 Rutherford was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851, to travel to England for postgraduate study at the Cavendish Laboratory, University of Cambridge. In 1897, he was awarded a BA Research Degree and the Coutts-Trotter Studentship from Trinity College, Cambridge.", "title": "Early life and education" }, { "paragraph_id": 8, "text": "When Rutherford began his studies at Cambridge, he was among the first 'aliens' (those without a Cambridge degree) allowed to do research at the university, and was additionally honoured to study under J. J. Thomson.", "title": "Scientific career" }, { "paragraph_id": 9, "text": "With Thomson's encouragement, Rutherford detected radio waves at 0.5 miles (800 m), and briefly held the world record for the distance over which electromagnetic waves could be detected, although when he presented his results at the British Association meeting in 1896, he discovered he had been outdone by Guglielmo Marconi, whose radio waves had sent a message across nearly 10 miles (16 km).", "title": "Scientific career" }, { "paragraph_id": 10, "text": "Again under Thomson's leadership, Rutherford worked on the conductive effects of X-rays on gases, which led to the discovery of the electron, the results first presented by Thomson in 1897. Hearing of Henri Becquerel's experience with uranium, Rutherford started to explore its radioactivity, discovering two types that differed from X-rays in their penetrating power. Continuing his research in Canada, in 1899 he coined the terms \"alpha ray\" and \"beta ray\" to describe these two distinct types of radiation.", "title": "Scientific career" }, { "paragraph_id": 11, "text": "In 1898, Rutherford was accepted to the chair of Macdonald Professor of physics position at McGill University in Montreal, Canada, on Thomson's recommendation. From 1900 to 1903, he was joined at McGill by the young chemist Frederick Soddy (Nobel Prize in Chemistry, 1921) for whom he set the problem of identifying the noble gas emitted by the radioactive element thorium, a substance which was itself radioactive and would coat other substances. 
Once he had eliminated all the normal chemical reactions, Soddy suggested that it must be one of the inert gases, which they named thoron. This substance was later found to be Rn, an isotope of radon. They also found another substance they called Thorium X, later identified as Rn, and continued to find traces of helium. They also worked with samples of \"Uranium X\" (protactinium), from William Crookes, and radium, from Marie Curie. Rutherford further investigated thoron in conjunction with R.B. Owens and found that a sample of radioactive material of any size invariably took the same amount of time for half the sample to decay (in this case, 111⁄2 minutes), a phenomenon for which he coined the term \"half-life\". Rutherford and Soddy published their paper \"Law of Radioactive Change\" to account for all their experiments. Until then, atoms were assumed to be the indestructible basis of all matter; and although Curie had suggested that radioactivity was an atomic phenomenon, the idea of the atoms of radioactive substances breaking up was a radically new idea. Rutherford and Soddy demonstrated that radioactivity involved the spontaneous disintegration of atoms into other, as yet, unidentified matter.", "title": "Scientific career" }, { "paragraph_id": 12, "text": "In 1903, Rutherford considered a type of radiation, discovered (but not named) by French chemist Paul Villard in 1900, as an emission from radium, and realised that this observation must represent something different from his own alpha and beta rays, due to its very much greater penetrating power. Rutherford therefore gave this third type of radiation the name of gamma ray. All three of Rutherford's terms are in standard use today – other types of radioactive decay have since been discovered, but Rutherford's three types are among the most common. In 1904, Rutherford suggested that radioactivity provides a source of energy sufficient to explain the existence of the Sun for the many millions of years required for the slow biological evolution on Earth proposed by biologists such as Charles Darwin. The physicist Lord Kelvin had argued earlier for a much younger Earth, based on the insufficiency of known energy sources, but Rutherford pointed out, at a lecture attended by Kelvin, that radioactivity could solve this problem. Later that year, he was elected as a member to the American Philosophical Society, and in 1907 he returned to Britain to take the chair of physics at the Victoria University of Manchester.", "title": "Scientific career" }, { "paragraph_id": 13, "text": "In Manchester, Rutherford continued his work with alpha radiation. In conjunction with Hans Geiger, he developed zinc sulfide scintillation screens and ionisation chambers to count alpha particles. By dividing the total charge they produced by the number counted, Rutherford decided that the charge on the alpha particle was two. In late 1907, Ernest Rutherford and Thomas Royds allowed alphas to penetrate a very thin window into an evacuated tube. As they sparked the tube into discharge, the spectrum obtained from it changed, as the alphas accumulated in the tube. Eventually, the clear spectrum of helium gas appeared, proving that alphas were at least ionised helium atoms, and probably helium nuclei. 
Ernest Rutherford was awarded the 1908 Nobel Prize in Chemistry \"for his investigations into the disintegration of the elements, and the chemistry of radioactive substances\".", "title": "Scientific career" }, { "paragraph_id": 14, "text": "Rutherford continued to make ground-breaking discoveries long after receiving the Nobel prize in 1908. Along with Hans Geiger and Ernest Marsden in 1909, he carried out the Geiger–Marsden experiment, which demonstrated the nuclear nature of atoms by measuring the deflection of alpha particles passing through a thin gold foil. Rutherford was inspired to ask Geiger and Marsden in this experiment to look for alpha particles with very high deflection angles, which was not expected according to any theory of matter at that time. Such deflection angles, although rare, were found. It was Rutherford's interpretation of this data that led him to formulate the Rutherford model of the atom in 1911 – that a very small charged nucleus, containing much of the atom's mass, was orbited by low-mass electrons.", "title": "Scientific career" }, { "paragraph_id": 15, "text": "In 1912, Rutherford was joined by Niels Bohr (who postulated that electrons moved in specific orbits). Bohr adapted Rutherford's nuclear structure to be consistent with Max Planck's quantum theory, and the resulting Rutherford–Bohr model is considered valid to this day.", "title": "Scientific career" }, { "paragraph_id": 16, "text": "During World War I, Rutherford worked on a top-secret project to solve the practical problems of submarine detection. Both Rutherford and Paul Langevin suggested the use of piezoelectricity, and Rutherford successfully developed a device which measured its output. The use of piezoelectricity then became essential to the development of ultrasound as it is known today. The claim that Rutherford developed sonar, however, is a misconception, as subaquatic detection technologies utilize Langevin's transducer.", "title": "Scientific career" }, { "paragraph_id": 17, "text": "Together with H.G. Moseley, Rutherford developed the atomic numbering system in 1913. Rutherford and Moseley's experiments used cathode rays to bombard various elements with streams of electrons and observed that each element responded in a consistent and distinct manner. Their research was the first to assert that each element could be defined by the properties of its inner structures – an observation that later led to the discovery of the atomic nucleus. This research led Rutherford to theorize that the hydrogen atom (at the time the least massive entity known to bear a positive charge) was a sort of \"positive electron\" – a component of every atomic element.", "title": "Scientific career" }, { "paragraph_id": 18, "text": "It was not until 1919 that Rutherford expanded upon his theory of the \"positive electron\" with a series of experiments beginning shortly before the end of his time at Manchester. He found that nitrogen, and other light elements, ejected a proton, which he called a \"hydrogen atom\", when hit with α (alpha) particles. In particular, he showed that particles ejected by alpha particles colliding with hydrogen have unit charge and 1/4 the momentum of alpha particles.", "title": "Scientific career" }, { "paragraph_id": 19, "text": "Rutherford returned to the Cavendish Laboratory in 1919, succeeding J. J. Thomson as the Cavendish professor and the laboratory's director, posts that he held until his death in 1937. 
During his tenure, Nobel prizes were awarded to James Chadwick for discovering the neutron (in 1932), John Cockcroft and Ernest Walton for an experiment that was to be known as splitting the atom using a particle accelerator, and Edward Appleton for demonstrating the existence of the ionosphere.", "title": "Scientific career" }, { "paragraph_id": 20, "text": "In 1919–1920, Rutherford continued his research on the \"hydrogen atom\" to confirm that alpha particles break down nitrogen nuclei and to affirm the nature of the products. This result showed Rutherford that hydrogen nuclei were a part of nitrogen nuclei (and by inference, probably other nuclei as well). Such a construction had been suspected for many years, on the basis of atomic weights that were integral multiples of that of hydrogen; see Prout's hypothesis. Hydrogen was known to be the lightest element, and its nuclei presumably the lightest nuclei. Now, because of all these considerations, Rutherford decided that a hydrogen nucleus was possibly a fundamental building block of all nuclei, and also possibly a new fundamental particle as well, since nothing was known to be lighter than that nucleus. Thus, confirming and extending the work of Wilhelm Wien, who in 1898 discovered the proton in streams of ionized gas, in 1920 Rutherford postulated the hydrogen nucleus to be a new particle, which he dubbed the proton.", "title": "Scientific career" }, { "paragraph_id": 21, "text": "In 1921, while working with Niels Bohr, Rutherford theorized about the existence of neutrons, (which he had christened in his 1920 Bakerian Lecture), which could somehow compensate for the repelling effect of the positive charges of protons by causing an attractive nuclear force and thus keep the nuclei from flying apart, due to the repulsion between protons. The only alternative to neutrons was the existence of \"nuclear electrons\", which would counteract some of the proton charges in the nucleus, since by then it was known that nuclei had about twice the mass that could be accounted for if they were simply assembled from hydrogen nuclei (protons). But how these nuclear electrons could be trapped in the nucleus, was a mystery. Rutherford is widely quoted as saying, regarding the results of these experiments: \"It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.\"", "title": "Scientific career" }, { "paragraph_id": 22, "text": "In 1932, Rutherford's theory of neutrons was proved by his associate James Chadwick, who recognized neutrons immediately when they were produced by other scientists and later himself, in bombarding beryllium with alpha particles. In 1935, Chadwick was awarded the Nobel Prize in Physics for this discovery.", "title": "Scientific career" }, { "paragraph_id": 23, "text": "From as early as 1948 to at least 2017, there was a long-standing myth that Rutherford was the first scientist to observe and report an artificial transmutation of a stable element into another element: nitrogen into oxygen. It was thought by many people to be one of Rutherford's greatest accomplishments. The New Zealand government even issued a commemorative stamp in the belief that the nitrogen-to-oxygen discovery belonged to Rutherford. 
Beginning in 2017, many scientific institutions corrected their versions of this history to indicate that the credit for the discovery belongs to Patrick Blackett, who undertook this research at Rutherford's suggestion and with his help and advice. Rutherford did detect the ejected proton in 1919 and interpreted it as evidence for disintegration of the nitrogen nucleus (to lighter nuclei). In 1925, Blackett showed that the actual product is oxygen and identified the true reaction as N + α → O + p. Rutherford therefore recognized \"that the nucleus may increase rather than diminish in mass as the result of collisions in which the proton is expelled\".", "title": "Scientific career" }, { "paragraph_id": 24, "text": "Rutherford received significant recognition in his home country of New Zealand. In 1901, he earned a DSc from the University of New Zealand. In 1916, he was awarded the Hector Memorial Medal. In 1925, Rutherford called for the New Zealand Government to support education and research, which led to the formation of the Department of Scientific and Industrial Research (DSIR) in the following year. In 1933, Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, which was established by the Royal Society of New Zealand as an award for outstanding scientific research.", "title": "Scientific career" }, { "paragraph_id": 25, "text": "Additionally, Rutherford received a number of awards from the British Crown. He was knighted in 1914. He was appointed to the Order of Merit in the 1925 New Year Honours. Between 1925 and 1930, he served as President of the Royal Society, and later as president of the Academic Assistance Council which helped almost 1,000 university refugees from Germany. In 1931 was raised to the peerage as Baron Rutherford of Nelson, decorating his coat of arms with a kiwi and a Māori warrior. The title became extinct upon his unexpected death in 1937.", "title": "Scientific career" }, { "paragraph_id": 26, "text": "The young Rutherford made his grandmother a wooden potato masher, which was believed to have been made during the school holidays. It has been held in the collection of the Royal Society since 1888.", "title": "Personal life and death" }, { "paragraph_id": 27, "text": "In 1900, Rutherford married Mary Georgina Newton (1876–1954), to whom he had become engaged before leaving New Zealand, at St Paul's Anglican Church, Papanui in Christchurch. They had one daughter, Eileen Mary (1901–1930), who married the physicist Ralph Fowler. Rutherford's hobbies included golf and motoring.", "title": "Personal life and death" }, { "paragraph_id": 28, "text": "For some time before his death, Rutherford had a small hernia, which he neglected to have fixed, and it became strangulated, rendering him violently ill. Despite an emergency operation in London, he died four days afterwards, at Cambridge on 19 October 1937 at age 66, of what physicians termed \"intestinal paralysis\". After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton and other illustrious British scientists such as Charles Darwin.", "title": "Personal life and death" }, { "paragraph_id": 29, "text": "Rutherford is considered to be among the greatest scientists in history. 
At the opening session of the 1938 Indian Science Congress, which Rutherford had been expected to preside over before his death, astrophysicist James Jeans spoke in his place and deemed him \"one of the greatest scientists of all time\", saying:", "title": "Legacy" }, { "paragraph_id": 30, "text": "In his flair for the right line of approach to a problem, as well as in the simple directness of his methods of attack, [Rutherford] often reminds us of Faraday, but he had two great advantages which Faraday did not possess, first, exuberant bodily health and energy, and second, the opportunity and capacity to direct a band of enthusiastic co-workers. Great though Faraday's output of work was, it seems to me that to match Rutherford's work in quantity as well as in quality, we must go back to Newton. In some respects he was more fortunate than Newton. Rutherford was ever the happy warrior – happy in his work, happy in its outcome, and happy in its human contacts.", "title": "Legacy" }, { "paragraph_id": 31, "text": "Rutherford is known as \"the father of nuclear physics\" because his research, and work done under him as laboratory director, established the nuclear structure of the atom and the essential nature of radioactive decay as a nuclear process. Patrick Blackett, a research fellow working under Rutherford, using natural alpha particles, demonstrated induced nuclear transmutation. Later, Rutherford's team, using protons from an accelerator, demonstrated artificially-induced nuclear reactions and transmutation.", "title": "Legacy" }, { "paragraph_id": 32, "text": "Rutherford died too early to see Leó Szilárd's idea of controlled nuclear chain reactions come into being. However, a speech of Rutherford's about his artificially-induced transmutation in lithium, printed in the 12 September 1933 issue of The Times, was reported by Szilárd to have been his inspiration for thinking of the possibility of a controlled energy-producing nuclear chain reaction.", "title": "Legacy" }, { "paragraph_id": 33, "text": "Rutherford's speech touched on the 1932 work of his students John Cockcroft and Ernest Walton in \"splitting\" lithium into alpha particles by bombardment with protons from a particle accelerator they had constructed. Rutherford realized that the energy released from the split lithium atoms was enormous, but he also realized that the energy needed for the accelerator, and its essential inefficiency in splitting atoms in this fashion, made the project an impossibility as a practical source of energy (accelerator-induced fission of light elements remains too inefficient to be used in this way, even today). Rutherford's speech in part, read:", "title": "Legacy" }, { "paragraph_id": 34, "text": "We might in these processes obtain very much more energy than the proton supplied, but on the average we could not expect to obtain energy in this way. It was a very poor and inefficient way of producing energy, and anyone who looked for a source of power in the transformation of the atoms was talking moonshine. But the subject was scientifically interesting because it gave insight into the atoms.", "title": "Legacy" }, { "paragraph_id": 35, "text": "The element rutherfordium, Rf, Z=104, was named in honour of Rutherford in 1997.", "title": "Legacy" } ]
Ernest Rutherford, 1st Baron Rutherford of Nelson, was a New Zealand physicist who was a pioneering researcher in both atomic and nuclear physics. Rutherford has been described as "the father of nuclear physics", and "the greatest experimentalist since Michael Faraday". In 1908, he was awarded the Nobel Prize in Chemistry "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances." He was the first Oceanian Nobel laureate, and the first to perform the awarded work in Canada. Rutherford's discoveries include the concept of radioactive half-life, the radioactive element radon, and the differentiation and naming of alpha and beta radiation. Together with Thomas Royds, Rutherford is credited with proving that alpha radiation is composed of helium nuclei. In 1911, he theorized that atoms have their charge concentrated in a very small nucleus. This was done through his discovery and interpretation of Rutherford scattering during the gold foil experiment performed by Hans Geiger and Ernest Marsden, resulting in his conception of the Rutherford model of the atom. In 1917, he performed the first artificially-induced nuclear reaction by conducting experiments where nitrogen nuclei were bombarded with alpha particles. As a result, he discovered the emission of a subatomic particle which he initially called the "hydrogen atom", but later named the proton. He is also credited with developing the atomic numbering system alongside Henry Moseley. His other achievements include advancing the fields of radio communications and ultrasound technology. Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. Under his leadership, the neutron was discovered by James Chadwick in 1932. In the same year, the first controlled experiment to split the nucleus was performed by John Cockcroft and Ernest Walton, working under his direction. In honour of his scientific advancements, Rutherford was recognized as a Baron in the peerages of New Zealand and Britain. After his death in 1937, he was buried in Westminster Abbey near Charles Darwin and Isaac Newton. The chemical element rutherfordium (104Rf) was named after him in 1997.
2001-09-30T19:33:54Z
2023-12-07T16:19:54Z
[ "Template:S-end", "Template:Pp-semi-indef", "Template:Reflist", "Template:Cite journal", "Template:S-aca", "Template:Cite ODNB", "Template:S-ttl", "Template:S-aft", "Template:Royal Society presidents 1900s", "Template:Use dmy dates", "Template:Convert", "Template:See also", "Template:Cite arXiv", "Template:Authority control", "Template:London Gazette", "Template:1908 Nobel Prize winners", "Template:Redirect-distinguish", "Template:Cite book", "Template:Notelist", "Template:Acad", "Template:Cite news", "Template:Copley Medallists 1901–1950", "Template:Use New Zealand English", "Template:Infobox scientist", "Template:Snd", "Template:Cite web", "Template:Dalton Medallists", "Template:Efn", "Template:ISBN", "Template:DNZB", "Template:External media", "Template:People whose names are used in chemical element names", "Template:Pp-move", "Template:Frac", "Template:Blockquote", "Template:Cite encyclopedia", "Template:Nobel Prize in Chemistry Laureates 1901–1925", "Template:Recipients of the Hector Memorial Medal", "Template:Short description", "Template:Nobelprize", "Template:S-start", "Template:S-bef", "Template:Postnominals", "Template:PM20", "Template:Subject bar" ]
https://en.wikipedia.org/wiki/Ernest_Rutherford
9,604
Many-worlds interpretation
The many-worlds interpretation (MWI) is a philosophical position about how the mathematics used in quantum mechanics relates to physical reality. It asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in some "world" or universe. In contrast to some other interpretations, the evolution of reality as a whole in MWI is rigidly deterministic and local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s. In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics. The many-worlds interpretation implies that there are most likely an uncountable number of universes. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend, the EPR paradox and Schrödinger's cat, since every possible outcome of a quantum event exists in its own universe. The many-worlds interpretation's key idea is that the linear and unitary dynamics of quantum mechanics applies everywhere and at all times and so describes the whole universe. In particular, it models a measurement as a unitary transformation, a correlation-inducing interaction, between observer and object, without using a collapse postulate, and models observers as ordinary quantum-mechanical systems. This stands in sharp contrast to the Copenhagen interpretation, in which a measurement is a "primitive" concept, not describable by unitary quantum mechanics; in Copenhagen the universe is divided into a quantum and a classical domain, and the collapse postulate is central. In MWI there is no division between classical and quantum: everything is quantum and there is no collapse. MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of an uncountable or undefinable amount or number of increasingly divergent, non-communicating parallel universes or quantum worlds. Sometimes dubbed Everett worlds, each is an internally consistent and actualized alternative history or timeline. The many-worlds interpretation uses decoherence to explain the measurement process and the emergence of a quasi-classical world. Wojciech H. Zurek, one of decoherence theory's pioneers, said: "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected." Zurek emphasizes that his work does not depend on a particular interpretation. The many-worlds interpretation shares many similarities with the decoherent histories interpretation, which also uses decoherence to explain the process of measurement or wave function collapse. 
MWI treats the other histories or worlds as real, since it regards the universal wave function as the "basic physical entity" or "the fundamental entity, obeying at all times a deterministic wave equation". The decoherent histories interpretation, on the other hand, needs only one of the histories (or worlds) to be real. Several authors, including Everett, John Archibald Wheeler and David Deutsch, call many-worlds a theory or metatheory, rather than just an interpretation. Everett argued that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world." Deutsch dismissed the idea that many-worlds is an "interpretation", saying that to call it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records." In his 1957 doctoral dissertation, Everett proposed that, rather than relying on external observation for analysis of isolated quantum systems, one could mathematically model an object, as well as its observers, as purely physical systems within the mathematical framework developed by Paul Dirac, John von Neumann, and others, discarding altogether the ad hoc mechanism of wave function collapse. Everett's original work introduced the concept of a relative state. Two (or more) subsystems, after a general interaction, become correlated, or as is now said, entangled. Everett noted that such entangled systems can be expressed as the sum of products of states, where the two or more subsystems are each in a state relative to each other. After a measurement or observation, one of the pair (or triple...) is the measured object or observed system, and one other member is the measuring apparatus (which may include an observer) having recorded the state of the measured system. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and can no longer be considered independent. In Everett's terminology, each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted. In the example of Schrödinger's cat, after the box is opened, the entangled system is the cat, the poison vial and the observer. One relative triple of states would be the alive cat, the unbroken vial and the observer seeing an alive cat. Another relative triple of states would be the dead cat, the broken vial and the observer seeing a dead cat. In the example of a measurement of a continuous variable (e.g. position q), the object-observer system decomposes into a continuum of pairs of relative states: the object system's relative states become Dirac delta functions, each centered on a particular value of q, with the corresponding observer relative state representing an observer having recorded that value of q. The states of the pairs of relative states are, post measurement, correlated with each other. In Everett's scheme, there is no collapse; instead, the Schrödinger equation, or its relativistic quantum field theory analog, holds all the time, everywhere. An observation or measurement is modeled by applying the wave equation to the entire system, comprising the object being observed and the observer. One consequence is that every observation causes the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches.
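A minimal numerical sketch can make this branching concrete (the code below is an illustration added for clarity, not material from Everett's work; the amplitudes a and b and the CNOT-style interaction are assumed purely for the example). An object qubit in a superposition interacts unitarily, with no collapse, with an "apparatus" qubit that starts in a ready state; the interaction leaves exactly the superposition of correlated, non-interacting branches described above.

import numpy as np

# Object qubit in a superposition a|0> + b|1>; apparatus qubit in the "ready" state |0>.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)      # example amplitudes, assumed for illustration
obj = np.array([a, b])
apparatus = np.array([1.0, 0.0])
state = np.kron(obj, apparatus)            # joint state in the product basis |object, apparatus>

# A CNOT-like unitary plays the role of the correlation-inducing "measurement":
# it flips the apparatus qubit exactly when the object qubit is |1>.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

after = cnot @ state
print(after)              # [a, 0, 0, b]: the branches a|0>|"saw 0"> + b|1>|"saw 1">

# Tracing out the object leaves the apparatus in the mixture diag(|a|^2, |b|^2):
rho = np.outer(after, after.conj()).reshape(2, 2, 2, 2)
rho_apparatus = np.trace(rho, axis1=0, axis2=2)
print(rho_apparatus)      # off-diagonal terms vanish, so the two branches do not interfere

The reduced state of the apparatus comes out diagonal, which is the decoherence-flavoured way of saying that, after the interaction, each relative state evolves as if the other branch were absent.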
Thus the process of measurement or observation, or any correlation-inducing interaction, splits the system into sets of relative states, where each set of relative states, forming a branch of the universal wave function, is consistent within itself, and all future measurements (including by multiple observers) will confirm this consistency. Everett had referred to the combined observer–object system as split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a branching tree, where each branch is a set of all the states relative to each other. Bryce DeWitt popularized Everett's work with a series of publications calling it the Many Worlds Interpretation. Focusing on the splitting process, DeWitt introduced the term "world" to describe a single branch of that tree, which is a consistent history. All observations or measurements within any branch are consistent with each other. Since many observation-like events have happened and are constantly happening, there are an enormous and growing number of simultaneously existing states or "worlds". MWI removes the observer-dependent role in the quantum measurement process by replacing wave function collapse with the established mechanism of quantum decoherence. As the observer's role lies at the heart of all "quantum paradoxes" such as the EPR paradox and von Neumann's "boundary problem", this provides a clearer and easier approach to their resolution. Since the Copenhagen interpretation requires the existence of a classical domain beyond the one described by quantum mechanics, it has been criticized as inadequate for the study of cosmology. While there is no evidence that Everett was inspired by issues of cosmology, he developed his theory with the explicit goal of allowing quantum mechanics to be applied to the universe as a whole, hoping to stimulate the discovery of new phenomena. This hope has been realized in the later development of quantum cosmology. MWI is a realist, deterministic and local theory. It achieves this by removing wave function collapse, which is indeterministic and nonlocal, from the deterministic and local equations of quantum theory. MWI (like other, broader multiverse theories) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe. MWI depends crucially on the linearity of quantum mechanics, which underpins the superposition principle. If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds is invalid. All quantum field theories are linear and compatible with the MWI, a point Everett emphasized as a motivation for the MWI. While quantum gravity or string theory may be non-linear in this respect, there is as yet no evidence of this. As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) pass through the double slit, a calculation assuming wavelike behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves. 
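As a minimal sketch of the wave calculation mentioned above, the following toy computation treats the two slits as ideal point sources and adds their amplitudes; the wavelength and slit separation are invented for the example. The squared magnitude of the summed amplitude gives the relative probability of detecting a particle at each angle, producing the familiar fringe pattern even though each individual particle is registered at one definite spot.

# Illustrative sketch (parameters assumed for the example): far-field two-slit
# interference with the slits idealized as point sources.
import numpy as np

wavelength = 500e-9        # 500 nm light (assumed)
slit_separation = 2e-6     # 2 micrometres between slits (assumed)

theta = np.linspace(-0.2, 0.2, 9)                       # detection angles in radians
phase = np.pi * slit_separation * np.sin(theta) / wavelength
amplitude = np.exp(1j * phase) + np.exp(-1j * phase)    # sum of the two path amplitudes
intensity = np.abs(amplitude) ** 2                      # relative detection probability

for t, i in zip(theta, intensity):
    print(f"theta={t:+.3f} rad   relative intensity={i:.3f}")

# Bright and dark fringes alternate with angle; the pattern only builds up
# statistically over many single-particle detections.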
Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wave function collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable. Everett's PhD work provided such an interpretation. He argued that for a composite system—such as a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wave function collapse) the notion of a relativity of states. Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wave function contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wave function collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wave function's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not for the observables to define the theory.) Since the wave function appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wave function collapse from the theory. In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether it is in fact in a macroscopic superposition or has collapsed into a single branch, as predicted by the Copenhagen interpretation. Since then Lockwood, Vaidman, and others have made similar proposals, which require placing macroscopic objects in a coherent superposition and interfering them, a task currently beyond experimental capability. Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. 
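Before the probability question is taken up, a minimal sketch of the no-collapse measurement model just described may help; the spin labels and the unitary U are illustrative notation, not Everett's own:

\[
\bigl(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\bigr)\otimes|\text{ready}\rangle
\;\xrightarrow{\ \hat U\ }\;
\alpha\,|{\uparrow}\rangle\,|\text{recorded }{\uparrow}\rangle
\;+\;
\beta\,|{\downarrow}\rangle\,|\text{recorded }{\downarrow}\rangle .
\]

Because \(\hat U\) is linear and unitary, neither term is removed: each relative pair evolves independently from then on, so all of the observer's later records within a term remain consistent with the first one, and collapse appears to have occurred even though nothing non-unitary happened. Deutsch's proposed test amounts to having the outside experimenter apply a further unitary that undoes \(\hat U\) and interferes the two terms, something that would be impossible if one of them had genuinely been removed by a collapse.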
As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule. Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process. To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wave function should have. His derivation has been criticized as relying on unmotivated assumptions. Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful. DeWitt and Graham and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. They try to show that in the limit of uncountably many measurements, no worlds would have relative frequencies that didn't match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect. A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace and Saunders. They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, "This work will go down as one of the most important developments in the history of science." In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed. In 2005, Zurek produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave. In 2016, Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman does not find it satisfactory. In 2021, Simon Saunders produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. 
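As a worked illustration of this equal-norm branch counting (the outcome weights below are invented for the example, and the construction works this directly only when the weights are commensurable), each outcome's branch is subdivided into sub-branches of a common 2-norm, and the branch-count ratios then reproduce the Born weights, as the continuation of this passage notes.

# Illustrative sketch (weights assumed for the example): branch counting with
# branches defined to have equal 2-norm, so branch-count ratios give Born weights.
from fractions import Fraction

# Born weights |amplitude|^2 for two measurement outcomes (assumed values).
weights = {"A": Fraction(2, 3), "B": Fraction(1, 3)}

# Common norm-squared per branch: a common divisor of the weights.
per_branch = Fraction(1, 3)

branch_counts = {outcome: w / per_branch for outcome, w in weights.items()}
total = sum(branch_counts.values())

for outcome, n in branch_counts.items():
    print(f"outcome {outcome}: {n} equal-norm branches, probability {n / total}")

# outcome A: 2 branches, probability 2/3
# outcome B: 1 branch,  probability 1/3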
The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule. As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalized states. The assumption is that the preferred basis to use is the one which assigns a unique measurement outcome to each world. This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. This is known today as the preferred basis problem. The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence into the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics. This approach to deriving the preferred basis has been criticized as creating circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability and probability depends on the ontology derived from decoherence. Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics. MWI originated in Everett's Princeton University PhD thesis "The Theory of the Universal Wave Function", developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state"; Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for a decade after publication in 1957. Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. 
Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wave function as physical and treating it as information became interchangeable. Leon Cooper and Deborah Van Vechten developed a very similar approach before reading Everett's work. Zeh also came to the same conclusions as Everett before reading his work, then built a new theory of quantum decoherence based on these ideas. According to people who knew him, Everett believed in the literal reality of the other quantum worlds. His son and wife reported that he "never wavered in his belief over his many-worlds theory". In their detailed review of Everett's work, Osnaghi, Freitas, and Freire Jr. note that Everett consistently used quotes around "real" to indicate a meaning within scientific practice. MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett had already left academia in 1957, never to return, and in 1980, Wheeler disavowed the theory. One of MWI's strongest longtime advocates is David Deutsch. According to him, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, Deutsch suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". He also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin. Philosophers of science James Ladyman and Don Ross say that MWI could be true, but do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy. Some scientists consider MWI unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. 
Collaborating with James Hartle, Gell-Mann worked toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find MWI too extreme, though it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse". Roger Penrose argues that the idea is flawed because it is based on an oversimplified version of quantum mechanics that does not account for gravity. In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, "the rules must change when gravity is involved". He further asserts that gravity helps anchor reality and that "blurry" events have only one allowable outcome: "electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory". On the other hand, "in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field". Philosopher of science Robert P. Crease says that MWI is "one of the most implausible and unrealistic ideas in the history of science" because it means that everything conceivable happens. Science writer Philip Ball calls MWI's implications fantasies, since "beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'". Theoretical physicist Gerard 't Hooft also dismisses the idea: "I do not believe that we have to live with the many-worlds interpretation. Indeed, it would be a stupendous number of parallel worlds, which are only there because physicists couldn't decide which of them is real." Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated. A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true". Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory", Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence."
But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'" A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored. A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen; the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll. Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can distinguish between the Copenhagen interpretation of quantum mechanics and the many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. Most experts believe that the experiment would not work in the real world, because the world with the surviving experimenter has a lower "measure" than the world before the experiment, making it less likely that the experimenter will experience their survival. DeWitt has said that "[Everett, Wheeler and Graham] do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down." Tegmark has affirmed that absurd or highly unlikely events are inevitable but rare under MWI: "Things inconsistent with the laws of physics will never happen—everything else will... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely." According to Ladyman and Ross, in general, many of the unrealized possibilities discussed in other scientific fields have no counterparts in other branches, because they are in fact incompatible with the universal wave function. David Deutsch speculates in his book The Beginning of Infinity that a great deal of fiction could occur somewhere in the multiverse. For example, the historical speculations entertained within the alternate history genre might be realized in possible parallel universes, except those that break the laws of physics. As John Gribbin puts it, expanding on this point, "There really is, for example, a Wuthering Heights world (but not a Harry Potter world)."
[ { "paragraph_id": 0, "text": "The many-worlds interpretation (MWI) is a philosophical position about how the mathematics used in quantum mechanics relates to physical reality. It asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in some \"world\" or universe. In contrast to some other interpretations, the evolution of reality as a whole in MWI is rigidly deterministic and local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s.", "title": "" }, { "paragraph_id": 1, "text": "In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics.", "title": "" }, { "paragraph_id": 2, "text": "The many-worlds interpretation implies that there are most likely an uncountable number of universes. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend, the EPR paradox and Schrödinger's cat, since every possible outcome of a quantum event exists in its own universe.", "title": "" }, { "paragraph_id": 3, "text": "The many-worlds interpretation's key idea is that the linear and unitary dynamics of quantum mechanics applies everywhere and at all times and so describes the whole universe. In particular, it models a measurement as a unitary transformation, a correlation-inducing interaction, between observer and object, without using a collapse postulate, and models observers as ordinary quantum-mechanical systems. This stands in sharp contrast to the Copenhagen interpretation, in which a measurement is a \"primitive\" concept, not describable by unitary quantum mechanics; in Copenhagen the universe is divided into a quantum and a classical domain, and the collapse postulate is central. In MWI there is no division between classical and quantum: everything is quantum and there is no collapse. MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of an uncountable or undefinable amount or number of increasingly divergent, non-communicating parallel universes or quantum worlds. Sometimes dubbed Everett worlds, each is an internally consistent and actualized alternative history or timeline.", "title": "Overview of the interpretation" }, { "paragraph_id": 4, "text": "The many-worlds interpretation uses decoherence to explain the measurement process and the emergence of a quasi-classical world. Wojciech H. Zurek, one of decoherence theory's pioneers, said: \"Under scrutiny of the environment, only pointer states remain unchanged. 
Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected.\" Zurek emphasizes that his work does not depend on a particular interpretation.", "title": "Overview of the interpretation" }, { "paragraph_id": 5, "text": "The many-worlds interpretation shares many similarities with the decoherent histories interpretation, which also uses decoherence to explain the process of measurement or wave function collapse. MWI treats the other histories or worlds as real, since it regards the universal wave function as the \"basic physical entity\" or \"the fundamental entity, obeying at all times a deterministic wave equation\". The decoherent histories interpretation, on the other hand, needs only one of the histories (or worlds) to be real.", "title": "Overview of the interpretation" }, { "paragraph_id": 6, "text": "Several authors, including Everett, John Archibald Wheeler and David Deutsch, call many-worlds a theory or metatheory, rather than just an interpretation. Everett argued that it was the \"only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world.\" Deutsch dismissed the idea that many-worlds is an \"interpretation\", saying that to call it an interpretation \"is like talking about dinosaurs as an 'interpretation' of fossil records.\"", "title": "Overview of the interpretation" }, { "paragraph_id": 7, "text": "In his 1957 doctoral dissertation, Everett proposed that, rather than relying on external observation for analysis of isolated quantum systems, one could mathematically model an object, as well as its observers, as purely physical systems within the mathematical framework developed by Paul Dirac, John von Neumann, and others, discarding altogether the ad hoc mechanism of wave function collapse.", "title": "Overview of the interpretation" }, { "paragraph_id": 8, "text": "Everett's original work introduced the concept of a relative state. Two (or more) subsystems, after a general interaction, become correlated, or as is now said, entangled. Everett noted that such entangled systems can be expressed as the sum of products of states, where the two or more subsystems are each in a state relative to each other. After a measurement or observation one of the pair (or triple...) is the measured, object or observed system, and one other member is the measuring apparatus (which may include an observer) having recorded the state of the measured system. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and can no longer be considered independent. In Everett's terminology, each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted.", "title": "Overview of the interpretation" }, { "paragraph_id": 9, "text": "In the example of Schrödinger's cat, after the box is opened, the entangled system is the cat, the poison vial and the observer. One relative triple of states would be the alive cat, the unbroken vial and the observer seeing an alive cat. Another relative triple of states would be the dead cat, the broken vial and the observer seeing a dead cat.", "title": "Overview of the interpretation" }, { "paragraph_id": 10, "text": "In the example of a measurement of a continuous variable (e.g. 
position q) the object-observer system decomposes into a continuum of pairs of relative states: the object system's relative state becomes a Dirac delta function each centered on a particular value of q and the corresponding observer relative state representing an observer having recorded the value of q. The states of the pairs of relative states are, post measurement, correlated with each other.", "title": "Overview of the interpretation" }, { "paragraph_id": 11, "text": "In Everett's scheme, there is no collapse; instead, the Schrödinger equation, or its quantum field theory, relativistic analog, holds all the time, everywhere. An observation or measurement is modeled by applying the wave equation to the entire system, comprising the object being observed and the observer. One consequence is that every observation causes the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches.", "title": "Overview of the interpretation" }, { "paragraph_id": 12, "text": "Thus the process of measurement or observation, or any correlation-inducing interaction, splits the system into sets of relative states, where each set of relative states, forming a branch of the universal wave function, is consistent within itself, and all future measurements (including by multiple observers) will confirm this consistency.", "title": "Overview of the interpretation" }, { "paragraph_id": 13, "text": "Everett had referred to the combined observer–object system as split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a branching tree, where each branch is a set of all the states relative to each other. Bryce DeWitt popularized Everett's work with a series of publications calling it the Many Worlds Interpretation. Focusing on the splitting process, DeWitt introduced the term \"world\" to describe a single branch of that tree, which is a consistent history. All observations or measurements within any branch are consistent with each other.", "title": "Overview of the interpretation" }, { "paragraph_id": 14, "text": "Since many observation-like events have happened and are constantly happening, there are an enormous and growing number of simultaneously existing states or \"worlds\".", "title": "Overview of the interpretation" }, { "paragraph_id": 15, "text": "MWI removes the observer-dependent role in the quantum measurement process by replacing wave function collapse with the established mechanism of quantum decoherence. As the observer's role lies at the heart of all \"quantum paradoxes\" such as the EPR paradox and von Neumann's \"boundary problem\", this provides a clearer and easier approach to their resolution.", "title": "Overview of the interpretation" }, { "paragraph_id": 16, "text": "Since the Copenhagen interpretation requires the existence of a classical domain beyond the one described by quantum mechanics, it has been criticized as inadequate for the study of cosmology. While there is no evidence that Everett was inspired by issues of cosmology, he developed his theory with the explicit goal of allowing quantum mechanics to be applied to the universe as a whole, hoping to stimulate the discovery of new phenomena. This hope has been realized in the later development of quantum cosmology.", "title": "Overview of the interpretation" }, { "paragraph_id": 17, "text": "MWI is a realist, deterministic and local theory. 
It achieves this by removing wave function collapse, which is indeterministic and nonlocal, from the deterministic and local equations of quantum theory.", "title": "Overview of the interpretation" }, { "paragraph_id": 18, "text": "MWI (like other, broader multiverse theories) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe.", "title": "Overview of the interpretation" }, { "paragraph_id": 19, "text": "MWI depends crucially on the linearity of quantum mechanics, which underpins the superposition principle. If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds is invalid. All quantum field theories are linear and compatible with the MWI, a point Everett emphasized as a motivation for the MWI. While quantum gravity or string theory may be non-linear in this respect, there is as yet no evidence of this.", "title": "Overview of the interpretation" }, { "paragraph_id": 20, "text": "As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) pass through the double slit, a calculation assuming wavelike behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves.", "title": "Overview of the interpretation" }, { "paragraph_id": 21, "text": "Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of \"collapse\" in which an indeterminate quantum system would probabilistically collapse onto, or select, just one determinate outcome to \"explain\" this phenomenon of observation. Wave function collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable.", "title": "Overview of the interpretation" }, { "paragraph_id": 22, "text": "Everett's PhD work provided such an interpretation. He argued that for a composite system—such as a subject (the \"observer\" or measuring apparatus) observing an object (the \"observed\" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wave function collapse) the notion of a relativity of states.", "title": "Overview of the interpretation" }, { "paragraph_id": 23, "text": "Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wave function contains two \"relative states\": a \"collapsed\" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. 
The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wave function collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wave function's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not for the observables to define the theory.) Since the wave function appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wave function collapse from the theory.", "title": "Overview of the interpretation" }, { "paragraph_id": 24, "text": "In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether it is in fact in a macroscopic superposition or has collapsed into a single branch, as predicted by the Copenhagen interpretation. Since then Lockwood, Vaidman, and others have made similar proposals, which require placing macroscopic objects in a coherent superposition and interfering them, a task currently beyond experimental capability.", "title": "Overview of the interpretation" }, { "paragraph_id": 25, "text": "Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule.", "title": "Probability and the Born rule" }, { "paragraph_id": 26, "text": "Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process. To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wave function should have. His derivation has been criticized as relying on unmotivated assumptions. Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful.", "title": "Probability and the Born rule" }, { "paragraph_id": 27, "text": "DeWitt and Graham and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. 
They try to show that in the limit of uncountably many measurements, no worlds would have relative frequencies that didn't match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect.", "title": "Probability and the Born rule" }, { "paragraph_id": 28, "text": "A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace and Saunders. They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, \"This work will go down as one of the most important developments in the history of science.\" In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed.", "title": "Probability and the Born rule" }, { "paragraph_id": 29, "text": "In 2005, Zurek produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave.", "title": "Probability and the Born rule" }, { "paragraph_id": 30, "text": "In 2016, Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman does not find it satisfactory.", "title": "Probability and the Born rule" }, { "paragraph_id": 31, "text": "In 2021, Simon Saunders produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule.", "title": "Probability and the Born rule" }, { "paragraph_id": 32, "text": "As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalized states. The assumption is that the preferred basis to use is the one which assigns a unique measurement outcome to each world. This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. 
This is known today as the preferred basis problem.", "title": "The preferred basis problem" }, { "paragraph_id": 33, "text": "The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence into the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics.", "title": "The preferred basis problem" }, { "paragraph_id": 34, "text": "This approach to deriving the preferred basis has been criticized as creating circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability and probability depends on the ontology derived from decoherence. Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics.", "title": "The preferred basis problem" }, { "paragraph_id": 35, "text": "MWI originated in Everett's Princeton University PhD thesis \"The Theory of the Universal Wave Function\", developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title \"Relative State Formulation of Quantum Mechanics\" (Wheeler contributed the title \"relative state\"; Everett originally called his approach the \"Correlation Interpretation\", where \"correlation\" refers to quantum entanglement). The phrase \"many-worlds\" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for a decade after publication in 1957.", "title": "History" }, { "paragraph_id": 36, "text": "Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might \"seem lunatic\". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were \"not alternatives but all really happen simultaneously\". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. Barrett describes it as indicating the similarity of \"general views\" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. 
Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which \"matter\" and \"mind\" are only different aspects or arrangements of the same common elements, treating the wave function as physical and treating it as information became interchangeable.", "title": "History" }, { "paragraph_id": 37, "text": "Leon Cooper and Deborah Van Vechten developed a very similar approach before reading Everett's work. Zeh also came to the same conclusions as Everett before reading his work, then built a new theory of quantum decoherence based on these ideas.", "title": "History" }, { "paragraph_id": 38, "text": "According to people who knew him, Everett believed in the literal reality of the other quantum worlds. His son and wife reported that he \"never wavered in his belief over his many-worlds theory\". In their detailed review of Everett's work, Osnaghi, Freitas, and Freire Jr. note that Everett consistently used quotes around \"real\" to indicate a meaning within scientific practice.", "title": "History" }, { "paragraph_id": 39, "text": "MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett had already left academia in 1957, never to return, and in 1980, Wheeler disavowed the theory.", "title": "Reception" }, { "paragraph_id": 40, "text": "One of MWI's strongest longtime advocates is David Deutsch. According to him, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, Deutsch suggested that parallelism that results from MWI could lead to \"a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it\". He also proposed that MWI will be testable (at least against \"naive\" Copenhagenism) when reversible computers become conscious via the reversible observation of spin.", "title": "Reception" }, { "paragraph_id": 41, "text": "Philosophers of science James Ladyman and Don Ross say that MWI could be true, but do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. 
They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy.", "title": "Reception" }, { "paragraph_id": 42, "text": "Some scientists consider MWI unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them.", "title": "Reception" }, { "paragraph_id": 43, "text": "Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann worked toward the development a more \"palatable\" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find MWI too extreme, though it \"has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse\".", "title": "Reception" }, { "paragraph_id": 44, "text": "Roger Penrose argues that the idea is flawed because it is based on an oversimplified version of quantum mechanics that does not account for gravity. In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, \"the rules must change when gravity is involved\". He further asserts that gravity helps anchor reality and \"blurry\" events have only one allowable outcome: \"electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory\". On the other hand, \"in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field\".", "title": "Reception" }, { "paragraph_id": 45, "text": "Philosopher of science Robert P. Crease says that MWI is \"one of the most implausible and unrealistic ideas in the history of science\" because it means that everything conceivable happens. Science writer Philip Ball calls MWI's implications fantasies, since \"beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'\".", "title": "Reception" }, { "paragraph_id": 46, "text": "Theoretical physicist Gerard 't Hooft also dismisses the idea: \"I do not believe that we have to live with the many-worlds interpretation. Indeed, it would be a stupendous number of parallel worlds, which are only there because physicists couldn't decide which of them is real.\"", "title": "Reception" }, { "paragraph_id": 47, "text": "Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when \"worlds\" can be regarded as separate, and that no objective criterion for that separation can actually be formulated.", "title": "Reception" }, { "paragraph_id": 48, "text": "A poll of 72 \"leading quantum cosmologists and other quantum field theorists\" conducted before 1991 by L. 
David Raub showed 58% agreement with \"Yes, I think MWI is true\".", "title": "Reception" }, { "paragraph_id": 49, "text": "Max Tegmark reports the result of a \"highly unscientific\" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, \"The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations.\"", "title": "Reception" }, { "paragraph_id": 50, "text": "In response to Sean M. Carroll's statement \"As crazy as it sounds, most working physicists buy into the many-worlds theory\", Michael Nielsen counters: \"at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence.\" But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres \"got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'\"", "title": "Reception" }, { "paragraph_id": 51, "text": "A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found \"Many Worlds (and decoherence)\" to be the least favored.", "title": "Reception" }, { "paragraph_id": 52, "text": "A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 \"Information-based/information-theoretical\", and 14 Copenhagen; the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.", "title": "Reception" }, { "paragraph_id": 53, "text": "Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can distinguish between the Copenhagen interpretation of quantum mechanics and the many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide.", "title": "Speculative implications" }, { "paragraph_id": 54, "text": "Most experts believe that the experiment would not work in the real world, because the world with the surviving experimenter has a lower \"measure\" than the world before the experiment, making it less likely that the experimenter will experience their survival.", "title": "Speculative implications" }, { "paragraph_id": 55, "text": "DeWitt has said that \"[Everett, Wheeler and Graham] do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down.\"", "title": "Speculative implications" }, { "paragraph_id": 56, "text": "Tegmark has affirmed that absurd or highly unlikely events are inevitable but rare under MWI: \"Things inconsistent with the laws of physics will never happen—everything else will... 
it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely.\"", "title": "Speculative implications" }, { "paragraph_id": 57, "text": "According to Ladyman and Ross, in general, many of the unrealized possibilities discussed in other scientific fields have no counterparts in other branches, because they are in fact incompatible with the universal wave function.", "title": "Speculative implications" }, { "paragraph_id": 58, "text": "David Deutsch speculates in his book The Beginning of Infinity that a great deal of fiction could occur somewhere in the multiverse. For example, the historical speculations entertained within the alternate history genre might be realized in possible parallel universes, except those that break the laws of physics. As John Gribbin puts it, expanding on this point, \"There really is, for example, a Wuthering Heights world (but not a Harry Potter world).\"", "title": "Speculative implications" } ]
The many-worlds interpretation (MWI) is a philosophical position about how the mathematics used in quantum mechanics relates to physical reality. It asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in some "world" or universe. In contrast to some other interpretations, the evolution of reality as a whole in MWI is rigidly deterministic and local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s. In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics. The many-worlds interpretation implies that there are most likely an uncountable number of universes. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend, the EPR paradox and Schrödinger's cat, since every possible outcome of a quantum event exists in its own universe.
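The branching described in this abstract can be illustrated with a minimal worked equation. The sketch below uses standard Dirac notation for a single two-outcome measurement; the specific state labels ("ready", "sees up", "sees down") are illustrative assumptions, not notation taken from Everett's paper.

```latex
% Relative-state sketch: one two-outcome measurement, purely unitary evolution.
% |up>, |down> are system states; |ready>, |sees up>, |sees down> are observer states.
\[
\tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle + |{\downarrow}\rangle\bigr)\otimes|\text{ready}\rangle
\;\longrightarrow\;
\tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle\otimes|\text{sees up}\rangle
 + |{\downarrow}\rangle\otimes|\text{sees down}\rangle\bigr)
\]
```

Each term on the right-hand side is a "world" in the sense used above: the observer state is correlated with (relative to) one outcome, and no collapse postulate is invoked, only Schrödinger evolution followed by decoherence between the two terms.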
2001-07-27T03:30:48Z
2023-12-28T22:53:22Z
[ "Template:Cols", "Template:Notelist", "Template:Cite book", "Template:Cite web", "Template:IEP", "Template:Quantum mechanics topics", "Template:'\"", "Template:Main", "Template:Cite journal", "Template:Cite arXiv", "Template:Webarchive", "Template:Sister project links", "Template:Short description", "Template:Rp", "Template:Arxiv", "Template:Time travel", "Template:Quantum mechanics", "Template:Reflist", "Template:Citation", "Template:ISBN", "Template:Cite news", "Template:Efn", "Template:Colend" ]
https://en.wikipedia.org/wiki/Many-worlds_interpretation
9,611
E-commerce
E-commerce (electronic commerce) is the activity of electronically buying or selling products on online services or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is the largest sector of the electronics industry and is in turn driven by the technological advances of the semiconductor industry. The term was coined and first employed by Robert Jacobson, Principal Consultant to the California State Assembly's Utilities & Commerce Committee, in the title and text of California's Electronic Commerce Act, carried by the late Committee Chairwoman Gwen Moore (D-L.A.) and enacted in 1984. E-commerce typically uses the web for at least a part of a transaction's life cycle although it may also use other technologies such as e-mail. Typical e-commerce transactions include the purchase of products (such as books from Amazon) or services (such as music downloads in the form of digital distribution such as the iTunes Store). There are three areas of e-commerce: online retailing, electronic markets, and online auctions. E-commerce is supported by electronic business. The existence value of e-commerce is to allow consumers to shop online and pay online through the Internet, saving the time and space of customers and enterprises, greatly improving transaction efficiency, especially for busy office workers, and also saving a lot of valuable time. E-commerce businesses may also employ some or all of the following: There are five essential categories of E-commerce: Contemporary electronic commerce can be classified into two categories. The first category is business based on types of goods sold (involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce). The second category is based on the nature of the participant (B2B, B2C, C2B and C2C). On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce. Aside from traditional e-commerce, the terms m-Commerce (mobile commerce) as well (around 2013) t-Commerce have also been used. In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, the more recent California Privacy Rights Act (2020), enacted through a popular election proposition and to control specifically how electronic commerce may be conducted in California. In the US in its entirety, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. 
As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC. The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies. Conflict of laws in cyberspace is a major hurdle for harmonization of legal framework for e-commerce around the world. In order to give a uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996). Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. The purpose was stated as being to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal to report complaints about online and related transactions with foreign companies. There is also Asia Pacific Economic Cooperation. APEC was established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group as well as working on common privacy regulations throughout the APEC region. In Australia, trade is covered under Australian Treasury Guidelines for electronic commerce and the Australian Competition & Consumer Commission regulates and offers advice on how to deal with businesses online, and offers specific advice on what happens if things go wrong. The European Union undertook an extensive enquiry into e-commerce in 2015-16 which observed significant growth in the development of e-commerce, along with some developments which raised concerns, such as increased use of selective distribution systems, which allow manufacturers to control routes to market, and "increased use of contractual restrictions to better control product distribution". The European Commission felt that some emerging practices might be justified if they could improve the quality of product distribution, but "others may unduly prevent consumers from benefiting from greater product choice and lower prices in e-commerce and therefore warrant Commission action" in order to promote compliance with EU competition rules. In the United Kingdom, the Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSR affects firms providing payment services and their customers. These firms include banks, non-bank credit card issuers and non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), who are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012. In India, the Information Technology Act 2000 governs the basic applicability of e-commerce. 
In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000), stipulated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications related activities, including electronic commerce. On the same day, the Administrative Measures on Internet Information Services were released, the first administrative regulations to address profit-generating activities conducted through the Internet, and lay the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted an Electronic Signature Law, which regulates data message, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation. It was a milestone in the course of improving China's electronic commerce legislation, and also marks the entering of China's rapid development stage for electronic commerce legislation. E-commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them. Cross-border e-Commerce is also an essential field for e-Commerce businesses. It has responded to the trend of globalization. It shows that numerous firms have opened up new businesses, expanded new markets, and overcome trade barriers; more and more enterprises have started exploring the cross-border cooperation field. In addition, compared with traditional cross-border trade, the information on cross-border e-commerce is more concealed. In the era of globalization, cross-border e-commerce for inter-firm companies means the activities, interactions, or social relations of two or more e-commerce enterprises. However, the success of cross-border e-commerce promotes the development of small and medium-sized firms, and it has finally become a new transaction mode. It has helped the companies solve financial problems and realize the reasonable allocation of resources field. SMEs ( small and medium enterprises) can also precisely match the demand and supply in the market, having the industrial chain majorization and creating more revenues for companies. In 2012, e-commerce sales topped $1 trillion for the first time in history. Mobile devices are playing an increasing role in the mix of e-commerce, this is also commonly called mobile commerce, or m-commerce. In 2014, one estimate saw purchases made on mobile devices making up 25% of the market by 2017. For traditional businesses, one research stated that information technology and cross-border e-commerce is a good opportunity for the rapid development and growth of enterprises. Many companies have invested an enormous volume of investment in mobile applications. The DeLone and McLean Model stated that three perspectives contribute to a successful e-business: information system quality, service quality and users' satisfaction. There is no limit of time and space, there are more opportunities to reach out to customers around the world, and to cut down unnecessary intermediate links, thereby reducing the cost price, and can benefit from one on one large customer data analysis, to achieve a high degree of personal customization strategic plan, in order to fully enhance the core competitiveness of the products in the company. 
Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers as a preferable way to promote consumer goods than static photos, and some brands like Sony are already paving the way for augmented reality commerce. Wayfair now lets you inspect a 3D version of its furniture in a home setting before buying. Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users, China's online shopping sales reached $253 billion in the first half of 2015, accounting for 10% of total Chinese consumer retail sales in that period. The Chinese retailers have been able to help consumers feel more comfortable shopping online. e-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade. In 2013, Alibaba had an e-commerce market share of 80% in China. In 2014, Alibaba still dominated the B2B marketplace in China with a market share of 44.82%, followed by several other companies including Made-in-China.com at 3.21%, and GlobalSources.com at 2.98%, with the total transaction value of China's B2B market exceeding 4.5 billion yuan. In 2014, there were 600 million Internet users in China (twice as many as in the US), making it the world's biggest online market. China is also the largest e-commerce market in the world by value of sales, with an estimated US$899 billion in 2016. It accounted for 42.4% of worldwide retail e-commerce in that year, the most of any country. Research shows that Chinese consumer motivations are different enough from Western audiences to require unique e-commerce app designs instead of simply porting Western apps into the Chinese market. The expansion of e-commerce in China has resulted in the development of Taobao villages, clusters of e-commerce businesses operating in rural areas. Because Taobao villages have increased the incomes or rural people and entrepreneurship in rural China, Taobao villages have become a component of rural revitalization strategies. In 2010, the United Kingdom had the highest per capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivers the biggest contribution to the enterprises' total revenue. Almost a quarter (24%) of the country's total turnover is generated via the online channel. The rate of growth of the number of internet users in the Arab countries has been rapid – 13.1% in 2015. A significant portion of the e-commerce market in the Middle East comprises people in the 30–34 year age group. Egypt has the largest number of internet users in the region, followed by Saudi Arabia and Morocco; these constitute 3/4th of the region's share. Yet, internet penetration is low: 35% in Egypt and 65% in Saudi Arabia. The Gulf Cooperation Council countries have a rapidly growing market and are characterized by a population that becomes wealthier (Yuldashev). As such, retailers have launched Arabic-language websites as a means to target this population. Secondly, there are predictions of increased mobile purchases and an expanding internet audience (Yuldashev). The growth and development of the two aspects make the GCC countries become larger players in the electronic commerce market with time progress. Specifically, research shows that the e-commerce market is expected to grow to over $20 billion by 2020 among these GCC countries (Yuldashev). 
The e-commerce market has also gained much popularity among western countries, and in particular Europe and the U.S. These countries have been highly characterized by consumer-packaged goods (CPG) (Geisler, 34). However, trends show that there are future signs of a reverse. Similar to the GCC countries, there has been increased purchase of goods and services in online channels rather than offline channels. Activist investors are trying hard to consolidate and slash their overall cost and the governments in western countries continue to impose more regulation on CPG manufacturers (Geisler, 36). In these senses, CPG investors are being forced to adapt to e-commerce as it is effective as well as a means for them to thrive. The future trends in the GCC countries will be similar to that of the western countries. Despite the forces that push business to adapt e-commerce as a means to sell goods and products, the manner in which customers make purchases is similar in countries from these two regions. For instance, there has been an increased usage of smartphones which comes in conjunction with an increase in the overall internet audience from the regions. Yuldashev writes that consumers are scaling up to more modern technology that allows for mobile marketing. However, the percentage of smartphone and internet users who make online purchases is expected to vary in the first few years. It will be independent on the willingness of the people to adopt this new trend (The Statistics Portal). For example, UAE has the greatest smartphone penetration of 73.8 per cent and has 91.9 per cent of its population has access to the internet. On the other hand, smartphone penetration in Europe has been reported to be at 64.7 per cent (The Statistics Portal). Regardless, the disparity in percentage between these regions is expected to level out in future because e-commerce technology is expected to grow to allow for more users. The e-commerce business within these two regions will result in competition. Government bodies at the country level will enhance their measures and strategies to ensure sustainability and consumer protection (Krings, et al.). These increased measures will raise the environmental and social standards in the countries, factors that will determine the success of the e-commerce market in these countries. For example, an adoption of tough sanctions will make it difficult for companies to enter the e-commerce market while lenient sanctions will allow ease of companies. As such, the future trends between GCC countries and the Western countries will be independent of these sanctions (Krings, et al.). These countries need to make rational conclusions in coming up with effective sanctions. India has an Internet user base of about 460 million as of December 2017. Despite being the third largest user base in the world, the penetration of the Internet is low compared to markets like the United States, United Kingdom or France but is growing at a much faster rate, adding around six million new entrants every month. In India, cash on delivery is the most preferred payment method, accumulating 75% of the e-retail activities. The India retail market is expected to rise from 2.5% in 2016 to 5% in 2020. In 2013, Brazil's e-commerce was growing quickly with retail e-commerce sales expected to grow at a double-digit pace through 2014. By 2016, eMarketer expected retail e-commerce sales in Brazil to reach $17.3 billion. Logistics in e-commerce mainly concerns fulfillment. 
Online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually control their own logistic operation because they do not have the ability to hire an outside company. Most large companies hire a fulfillment service that takes care of a company's logistic needs. Optimizing logistics processes, which involves long-term investment in an efficient storage infrastructure and the adoption of inventory management strategies, is crucial to prioritizing customer satisfaction throughout the entire process, from order placement to final delivery. E-commerce markets are growing at noticeable rates. The online market is expected to grow by 56% in 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars and e-retail revenues are projected to grow to 4.891 trillion US dollars in 2021. Traditional markets are expected to grow only 2% during the same time. Brick and mortar retailers are struggling because of online retailers' ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence offline and online by linking physical and online offerings. E-commerce allows customers to overcome geographical barriers and allows them to purchase products anytime and from anywhere. Online and traditional markets have different strategies for conducting business. Traditional retailers offer a smaller assortment of products because of limited shelf space, whereas online retailers often hold no inventory but send customer orders directly to the manufacturer. The pricing strategies are also different for traditional and online retailers. Traditional retailers base their prices on store traffic and the cost to keep inventory. Online retailers base prices on the speed of delivery. There are two ways for marketers to conduct business through e-commerce: fully online or online along with a brick and mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates. Many customers prefer online markets if the products can be delivered quickly at a relatively low price. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without the physical experience, which may cause customers to experience product or seller uncertainty. Another issue regarding the online market is concerns about the security of online transactions. Many customers remain loyal to well-known retailers because of this issue. Security is a primary problem for e-commerce in developed and developing countries. E-commerce security is the protection of businesses' websites and customers from unauthorized access, use, alteration, or destruction. The types of threats include malicious code, unwanted programs (adware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use different tools to avert security threats. These tools include firewalls, encryption software, digital certificates, and passwords. For a long time, companies had been troubled by the gap between the benefits that supply chain technology can provide and the solutions to deliver those benefits. However, the emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies. 
E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows (physical flow, financial flow and information flow) of the supply chain could also be affected by e-commerce. The effects on physical flows have improved the way products and inventory move for companies. For the information flows, e-commerce has expanded companies' information-processing capacity beyond what they used to have, and for the financial flows, e-commerce allows companies to have more efficient payment and settlement solutions. In addition, e-commerce has a more sophisticated level of impact on supply chains: Firstly, the performance gap will be eliminated since companies can identify gaps between different levels of supply chains by electronic means of solutions; Secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems, like SAP ERP, Xero, or Megaventory, have helped companies to manage operations with customers and suppliers. Yet these new capabilities are still not fully exploited. Thirdly, technology companies would keep investing in new e-commerce software solutions as they expect a return on investment. Fourthly, e-commerce would help to solve many issues that companies may find difficult to cope with, such as political barriers or cross-country changes. Finally, e-commerce provides companies with a more efficient and effective way to collaborate with each other within the supply chain. E-commerce helps create new job opportunities due to information-related services, software apps and digital products. It also causes job losses. The areas with the greatest predicted job loss are retail, postal, and travel agencies. The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes. In contrast, people with poor technical skills cannot enjoy these wage benefits. On the other hand, because e-commerce requires sufficient stocks that can be delivered to customers in time, the warehouse becomes an important element. Warehouses need more staff to manage, supervise and organize them, so the condition of the warehouse environment becomes a concern for employees. E-commerce brings convenience for customers as they do not have to leave home and only need to browse websites online, especially for buying products which are not sold in nearby shops. It can help customers buy a wider range of products and save customers' time. Consumers also gain power through online shopping. They are able to research products and compare prices among retailers. Thanks to the practice of user-generated ratings and reviews from companies like Bazaarvoice, Trustpilot, and Yelp, customers can also see what other people think of a product, and decide before buying if they want to spend money on it. Also, online shopping often provides sales promotions or discount codes, making it more cost-effective for customers. Moreover, e-commerce provides products' detailed information; even in-store staff cannot offer such detailed explanations. Customers can also review and track the order history online. E-commerce technologies cut transaction costs by allowing both manufacturers and consumers to bypass intermediaries. This is achieved by extending the search area for the best price deals and by group purchasing. The success of e-commerce at urban and regional levels depends on how local firms and consumers have adapted to e-commerce. 
However, e-commerce lacks human interaction for customers, especially for those who prefer face-to-face connection. Customers are also concerned with the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying the wrong-sized clothes, although these vary greatly in their fit for purpose. When a customer regrets the purchase of a product, it involves returning the goods and a refund process. This process is inconvenient as customers need to pack and post the goods. If the products are expensive, large or fragile, safety becomes an additional concern. In 2018, e-commerce generated 1.3 million short tons (1.2 megatonnes) of container cardboard in North America, an increase from 1.1 million short tons (1.0 megatonnes) in 2017. Only 35 percent of North American cardboard manufacturing capacity is from recycled content. The recycling rate in Europe is 80 percent and Asia is 93 percent. Amazon, the largest user of boxes, has a strategy to cut back on packing material and has reduced packaging material used by 19 percent by weight since 2016. Amazon is requiring retailers to manufacture their product packaging in a way that does not require additional shipping packaging. Amazon also has an 85-person team researching ways to reduce and improve its packaging and shipping materials. Accelerated movement of packages around the world includes accelerated movement of living things, with all its attendant risks. Weeds, pests, and diseases all sometimes travel in packages of seeds. Some of these packages are part of brushing manipulation of e-commerce reviews. E-commerce has been cited as a major force for the failure of major U.S. retailers in a trend frequently referred to as a "retail apocalypse." The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and forced companies to change their sales strategies. Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations. The trend has forced some traditional retailers to shutter their brick-and-mortar operations. In March 2020, global retail website traffic hit 14.3 billion visits, signifying an unprecedented growth of e-commerce during the lockdown of 2020. Later studies show that online sales increased by 25% and online grocery shopping increased by over 100% during the crisis in the United States. Meanwhile, as many as 29% of surveyed shoppers state that they will never go back to shopping in person again; in the UK, 43% of consumers state that they expect to keep on shopping the same way even after the lockdown is over. Retail e-commerce sales figures show that COVID-19 had a significant impact on e-commerce, and its sales are expected to reach $6.5 trillion by 2023. Some common applications related to electronic commerce are: A timeline for the development of e-commerce:
[ { "paragraph_id": 0, "text": "E-commerce (electronic commerce) is the activity of electronically buying or selling products on online services or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is the largest sector of the electronics industry and is in turn driven by the technological advances of the semiconductor industry.", "title": "" }, { "paragraph_id": 1, "text": "The term was coined and first employed by Robert Jacobson, Principal Consultant to the California State Assembly's Utilities & Commerce Committee, in the title and text of California's Electronic Commerce Act, carried by the late Committee Chairwoman Gwen Moore (D-L.A.) and enacted in 1984.", "title": "Defining e-commerce" }, { "paragraph_id": 2, "text": "E-commerce typically uses the web for at least a part of a transaction's life cycle although it may also use other technologies such as e-mail. Typical e-commerce transactions include the purchase of products (such as books from Amazon) or services (such as music downloads in the form of digital distribution such as the iTunes Store). There are three areas of e-commerce: online retailing, electronic markets, and online auctions. E-commerce is supported by electronic business. The existence value of e-commerce is to allow consumers to shop online and pay online through the Internet, saving the time and space of customers and enterprises, greatly improving transaction efficiency, especially for busy office workers, and also saving a lot of valuable time.", "title": "Defining e-commerce" }, { "paragraph_id": 3, "text": "E-commerce businesses may also employ some or all of the following:", "title": "Defining e-commerce" }, { "paragraph_id": 4, "text": "There are five essential categories of E-commerce:", "title": "Defining e-commerce" }, { "paragraph_id": 5, "text": "Contemporary electronic commerce can be classified into two categories. The first category is business based on types of goods sold (involves everything from ordering \"digital\" content for immediate online consumption, to ordering conventional goods and services, to \"meta\" services to facilitate other types of electronic commerce). The second category is based on the nature of the participant (B2B, B2C, C2B and C2C).", "title": "Forms" }, { "paragraph_id": 6, "text": "On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce.", "title": "Forms" }, { "paragraph_id": 7, "text": "Aside from traditional e-commerce, the terms m-Commerce (mobile commerce) as well (around 2013) t-Commerce have also been used.", "title": "Forms" }, { "paragraph_id": 8, "text": "In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, the more recent California Privacy Rights Act (2020), enacted through a popular election proposition and to control specifically how electronic commerce may be conducted in California. In the US in its entirety, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. 
The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.", "title": "Governmental regulation" }, { "paragraph_id": 9, "text": "The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies.", "title": "Governmental regulation" }, { "paragraph_id": 10, "text": "Conflict of laws in cyberspace is a major hurdle for harmonization of legal framework for e-commerce around the world. In order to give a uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996).", "title": "Governmental regulation" }, { "paragraph_id": 11, "text": "Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. The purpose was stated as being to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal to report complaints about online and related transactions with foreign companies.", "title": "Governmental regulation" }, { "paragraph_id": 12, "text": "There is also Asia Pacific Economic Cooperation. APEC was established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group as well as working on common privacy regulations throughout the APEC region.", "title": "Governmental regulation" }, { "paragraph_id": 13, "text": "In Australia, trade is covered under Australian Treasury Guidelines for electronic commerce and the Australian Competition & Consumer Commission regulates and offers advice on how to deal with businesses online, and offers specific advice on what happens if things go wrong.", "title": "Governmental regulation" }, { "paragraph_id": 14, "text": "The European Union undertook an extensive enquiry into e-commerce in 2015-16 which observed significant growth in the development of e-commerce, along with some developments which raised concerns, such as increased use of selective distribution systems, which allow manufacturers to control routes to market, and \"increased use of contractual restrictions to better control product distribution\". 
The European Commission felt that some emerging practices might be justified if they could improve the quality of product distribution, but \"others may unduly prevent consumers from benefiting from greater product choice and lower prices in e-commerce and therefore warrant Commission action\" in order to promote compliance with EU competition rules.", "title": "Governmental regulation" }, { "paragraph_id": 15, "text": "In the United Kingdom, the Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSR affects firms providing payment services and their customers. These firms include banks, non-bank credit card issuers and non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), who are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012.", "title": "Governmental regulation" }, { "paragraph_id": 16, "text": "In India, the Information Technology Act 2000 governs the basic applicability of e-commerce.", "title": "Governmental regulation" }, { "paragraph_id": 17, "text": "In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000), stipulated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications related activities, including electronic commerce. On the same day, the Administrative Measures on Internet Information Services were released, the first administrative regulations to address profit-generating activities conducted through the Internet, and lay the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted an Electronic Signature Law, which regulates data message, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation. It was a milestone in the course of improving China's electronic commerce legislation, and also marks the entering of China's rapid development stage for electronic commerce legislation.", "title": "Governmental regulation" }, { "paragraph_id": 18, "text": "E-commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them.", "title": "Global trends" }, { "paragraph_id": 19, "text": "Cross-border e-Commerce is also an essential field for e-Commerce businesses. It has responded to the trend of globalization. It shows that numerous firms have opened up new businesses, expanded new markets, and overcome trade barriers; more and more enterprises have started exploring the cross-border cooperation field. In addition, compared with traditional cross-border trade, the information on cross-border e-commerce is more concealed. In the era of globalization, cross-border e-commerce for inter-firm companies means the activities, interactions, or social relations of two or more e-commerce enterprises. 
However, the success of cross-border e-commerce promotes the development of small and medium-sized firms, and it has finally become a new transaction mode. It has helped the companies solve financial problems and realize the reasonable allocation of resources field. SMEs ( small and medium enterprises) can also precisely match the demand and supply in the market, having the industrial chain majorization and creating more revenues for companies.", "title": "Global trends" }, { "paragraph_id": 20, "text": "In 2012, e-commerce sales topped $1 trillion for the first time in history.", "title": "Global trends" }, { "paragraph_id": 21, "text": "Mobile devices are playing an increasing role in the mix of e-commerce, this is also commonly called mobile commerce, or m-commerce. In 2014, one estimate saw purchases made on mobile devices making up 25% of the market by 2017.", "title": "Global trends" }, { "paragraph_id": 22, "text": "For traditional businesses, one research stated that information technology and cross-border e-commerce is a good opportunity for the rapid development and growth of enterprises. Many companies have invested an enormous volume of investment in mobile applications. The DeLone and McLean Model stated that three perspectives contribute to a successful e-business: information system quality, service quality and users' satisfaction. There is no limit of time and space, there are more opportunities to reach out to customers around the world, and to cut down unnecessary intermediate links, thereby reducing the cost price, and can benefit from one on one large customer data analysis, to achieve a high degree of personal customization strategic plan, in order to fully enhance the core competitiveness of the products in the company.", "title": "Global trends" }, { "paragraph_id": 23, "text": "Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers as a preferable way to promote consumer goods than static photos, and some brands like Sony are already paving the way for augmented reality commerce. Wayfair now lets you inspect a 3D version of its furniture in a home setting before buying.", "title": "Global trends" }, { "paragraph_id": 24, "text": "Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users, China's online shopping sales reached $253 billion in the first half of 2015, accounting for 10% of total Chinese consumer retail sales in that period. The Chinese retailers have been able to help consumers feel more comfortable shopping online. e-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade. In 2013, Alibaba had an e-commerce market share of 80% in China. In 2014, Alibaba still dominated the B2B marketplace in China with a market share of 44.82%, followed by several other companies including Made-in-China.com at 3.21%, and GlobalSources.com at 2.98%, with the total transaction value of China's B2B market exceeding 4.5 billion yuan. In 2014, there were 600 million Internet users in China (twice as many as in the US), making it the world's biggest online market.", "title": "Global trends" }, { "paragraph_id": 25, "text": "China is also the largest e-commerce market in the world by value of sales, with an estimated US$899 billion in 2016. It accounted for 42.4% of worldwide retail e-commerce in that year, the most of any country. 
Research shows that Chinese consumer motivations are different enough from Western audiences to require unique e-commerce app designs instead of simply porting Western apps into the Chinese market.", "title": "Global trends" }, { "paragraph_id": 26, "text": "The expansion of e-commerce in China has resulted in the development of Taobao villages, clusters of e-commerce businesses operating in rural areas. Because Taobao villages have increased the incomes or rural people and entrepreneurship in rural China, Taobao villages have become a component of rural revitalization strategies.", "title": "Global trends" }, { "paragraph_id": 27, "text": "In 2010, the United Kingdom had the highest per capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivers the biggest contribution to the enterprises' total revenue. Almost a quarter (24%) of the country's total turnover is generated via the online channel.", "title": "Global trends" }, { "paragraph_id": 28, "text": "The rate of growth of the number of internet users in the Arab countries has been rapid – 13.1% in 2015. A significant portion of the e-commerce market in the Middle East comprises people in the 30–34 year age group. Egypt has the largest number of internet users in the region, followed by Saudi Arabia and Morocco; these constitute 3/4th of the region's share. Yet, internet penetration is low: 35% in Egypt and 65% in Saudi Arabia.", "title": "Global trends" }, { "paragraph_id": 29, "text": "The Gulf Cooperation Council countries have a rapidly growing market and are characterized by a population that becomes wealthier (Yuldashev). As such, retailers have launched Arabic-language websites as a means to target this population. Secondly, there are predictions of increased mobile purchases and an expanding internet audience (Yuldashev). The growth and development of the two aspects make the GCC countries become larger players in the electronic commerce market with time progress. Specifically, research shows that the e-commerce market is expected to grow to over $20 billion by 2020 among these GCC countries (Yuldashev). The e-commerce market has also gained much popularity among western countries, and in particular Europe and the U.S. These countries have been highly characterized by consumer-packaged goods (CPG) (Geisler, 34). However, trends show that there are future signs of a reverse. Similar to the GCC countries, there has been increased purchase of goods and services in online channels rather than offline channels. Activist investors are trying hard to consolidate and slash their overall cost and the governments in western countries continue to impose more regulation on CPG manufacturers (Geisler, 36). In these senses, CPG investors are being forced to adapt to e-commerce as it is effective as well as a means for them to thrive.", "title": "Global trends" }, { "paragraph_id": 30, "text": "The future trends in the GCC countries will be similar to that of the western countries. Despite the forces that push business to adapt e-commerce as a means to sell goods and products, the manner in which customers make purchases is similar in countries from these two regions. For instance, there has been an increased usage of smartphones which comes in conjunction with an increase in the overall internet audience from the regions. Yuldashev writes that consumers are scaling up to more modern technology that allows for mobile marketing. 
However, the percentage of smartphone and internet users who make online purchases is expected to vary in the first few years. It will be independent on the willingness of the people to adopt this new trend (The Statistics Portal). For example, UAE has the greatest smartphone penetration of 73.8 per cent and has 91.9 per cent of its population has access to the internet. On the other hand, smartphone penetration in Europe has been reported to be at 64.7 per cent (The Statistics Portal). Regardless, the disparity in percentage between these regions is expected to level out in future because e-commerce technology is expected to grow to allow for more users.", "title": "Global trends" }, { "paragraph_id": 31, "text": "The e-commerce business within these two regions will result in competition. Government bodies at the country level will enhance their measures and strategies to ensure sustainability and consumer protection (Krings, et al.). These increased measures will raise the environmental and social standards in the countries, factors that will determine the success of the e-commerce market in these countries. For example, an adoption of tough sanctions will make it difficult for companies to enter the e-commerce market while lenient sanctions will allow ease of companies. As such, the future trends between GCC countries and the Western countries will be independent of these sanctions (Krings, et al.). These countries need to make rational conclusions in coming up with effective sanctions.", "title": "Global trends" }, { "paragraph_id": 32, "text": "India has an Internet user base of about 460 million as of December 2017. Despite being the third largest user base in the world, the penetration of the Internet is low compared to markets like the United States, United Kingdom or France but is growing at a much faster rate, adding around six million new entrants every month. In India, cash on delivery is the most preferred payment method, accumulating 75% of the e-retail activities. The India retail market is expected to rise from 2.5% in 2016 to 5% in 2020.", "title": "Global trends" }, { "paragraph_id": 33, "text": "In 2013, Brazil's e-commerce was growing quickly with retail e-commerce sales expected to grow at a double-digit pace through 2014. By 2016, eMarketer expected retail e-commerce sales in Brazil to reach $17.3 billion.", "title": "Global trends" }, { "paragraph_id": 34, "text": "Logistics in e-commerce mainly concerns fulfillment. Online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually control their own logistic operation because they do not have the ability to hire an outside company. Most large companies hire a fulfillment service that takes care of a company's logistic needs. The optimization of logistics processes that contains long-term investment in an efficient storage infrastructure system and adoption of inventory management strategies is crucial to prioritize customer satisfaction throughout the entire process, from order placement to final delivery.", "title": "Logistics" }, { "paragraph_id": 35, "text": "E-commerce markets are growing at noticeable rates. The online market is expected to grow by 56% in 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars and e-retail revenues are projected to grow to 4.891 trillion US dollars in 2021. Traditional markets are only expected 2% growth during the same time. 
Brick and mortar retailers are struggling because of online retailer's ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence offline and online by linking physical and online offerings.", "title": "Impacts" }, { "paragraph_id": 36, "text": "E-commerce allows customers to overcome geographical barriers and allows them to purchase products anytime and from anywhere. Online and traditional markets have different strategies for conducting business. Traditional retailers offer fewer assortment of products because of shelf space where, online retailers often hold no inventory but send customer orders directly to the manufacturer. The pricing strategies are also different for traditional and online retailers. Traditional retailers base their prices on store traffic and the cost to keep inventory. Online retailers base prices on the speed of delivery.", "title": "Impacts" }, { "paragraph_id": 37, "text": "There are two ways for marketers to conduct business through e-commerce: fully online or online along with a brick and mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates. Many customers prefer online markets if the products can be delivered quickly at relatively low price. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without the physical experience, which may cause customers to experience product or seller uncertainty. Another issue regarding the online market is concerns about the security of online transactions. Many customers remain loyal to well-known retailers because of this issue.", "title": "Impacts" }, { "paragraph_id": 38, "text": "Security is a primary problem for e-commerce in developed and developing countries. E-commerce security is protecting businesses' websites and customers from unauthorized access, use, alteration, or destruction. The type of threats include: malicious codes, unwanted programs (ad ware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use different tools to avert security threats. These tools include firewalls, encryption software, digital certificates, and passwords.", "title": "Impacts" }, { "paragraph_id": 39, "text": "For a long time, companies had been troubled by the gap between the benefits which supply chain technology has and the solutions to deliver those benefits. However, the emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies.", "title": "Impacts" }, { "paragraph_id": 40, "text": "E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows (physical flow, financial flow and information flow) of the supply chain could be also affected by e-commerce. The affections on physical flows improved the way of product and inventory movement level for companies. 
For the information flows, e-commerce optimized the capacity of information processing than companies used to have, and for the financial flows, e-commerce allows companies to have more efficient payment and settlement solutions.", "title": "Impacts" }, { "paragraph_id": 41, "text": "In addition, e-commerce has a more sophisticated level of impact on supply chains: Firstly, the performance gap will be eliminated since companies can identify gaps between different levels of supply chains by electronic means of solutions; Secondly, as a result of e-commerce emergence, new capabilities such implementing ERP systems, like SAP ERP, Xero, or Megaventory, have helped companies to manage operations with customers and suppliers. Yet these new capabilities are still not fully exploited. Thirdly, technology companies would keep investing on new e-commerce software solutions as they are expecting investment return. Fourthly, e-commerce would help to solve many aspects of issues that companies may feel difficult to cope with, such as political barriers or cross-country changes. Finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain.", "title": "Impacts" }, { "paragraph_id": 42, "text": "E-commerce helps create new job opportunities due to information related services, software app and digital products. It also causes job losses. The areas with the greatest predicted job-loss are retail, postal, and travel agencies. The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes. In contrast, people with poor technical skills cannot enjoy the wages welfare. On the other hand, because e-commerce requires sufficient stocks that could be delivered to customers in time, the warehouse becomes an important element. Warehouse needs more staff to manage, supervise and organize, thus the condition of warehouse environment will be concerned by employees.", "title": "Impacts" }, { "paragraph_id": 43, "text": "E-commerce brings convenience for customers as they do not have to leave home and only need to browse websites online, especially for buying products which are not sold in nearby shops. It could help customers buy a wider range of products and save customers' time. Consumers also gain power through online shopping. They are able to research products and compare prices among retailers. Thanks to the practice of user-generated ratings and reviews from companies like Bazaarvoice, Trustpilot, and Yelp, customers can also see what other people think of a product, and decide before buying if they want to spend money on it. Also, online shopping often provides sales promotion or discounts code, thus it is more price effective for customers. Moreover, e-commerce provides products' detailed information; even the in-store staff cannot offer such detailed explanation. Customers can also review and track the order history online.", "title": "Impacts" }, { "paragraph_id": 44, "text": "E-commerce technologies cut transaction costs by allowing both manufactures and consumers to skip through the intermediaries. This is achieved through by extending the search area best price deals and by group purchase. 
The success of e-commerce in urban and regional levels depend on how the local firms and consumers have adopted to e-commerce.", "title": "Impacts" }, { "paragraph_id": 45, "text": "However, e-commerce lacks human interaction for customers, especially who prefer face-to-face connection. Customers are also concerned with the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying the wrong sized clothes, although these vary greatly in their fit for purpose. When the customer regret the purchase of a product, it involves returning goods and refunding process. This process is inconvenient as customers need to pack and post the goods. If the products are expensive, large or fragile, it refers to safety issues.", "title": "Impacts" }, { "paragraph_id": 46, "text": "In 2018, E-commerce generated 1.3 million short tons (1.2 megatonnes) of container cardboard in North America, an increase from 1.1 million (1.00)) in 2017. Only 35 percent of North American cardboard manufacturing capacity is from recycled content. The recycling rate in Europe is 80 percent and Asia is 93 percent. Amazon, the largest user of boxes, has a strategy to cut back on packing material and has reduced packaging material used by 19 percent by weight since 2016. Amazon is requiring retailers to manufacture their product packaging in a way that does not require additional shipping packaging. Amazon also has an 85-person team researching ways to reduce and improve their packaging and shipping materials.", "title": "Impacts" }, { "paragraph_id": 47, "text": "Accelerated movement of packages around the world includes accelerated movement of living things, with all its attendant risks. Weeds, pests, and diseases all sometimes travel in packages of seeds. Some of these packages are part of brushing manipulation of e-commerce reviews.", "title": "Impacts" }, { "paragraph_id": 48, "text": "E-commerce has been cited as a major force for the failure of major U.S. retailers in a trend frequently referred to as a \"retail apocalypse.\" The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and forced companies to change their sales strategies. Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations. The trend has forced some traditional retailers to shutter its brick and mortar operations.", "title": "Impacts" }, { "paragraph_id": 49, "text": "In March 2020, global retail website traffic hit 14.3 billion visits signifying an unprecedented growth of e-commerce during the lockdown of 2020. Later studies show that online sales increased by 25% and online grocery shopping increased by over 100% during the crisis in the United States. 
Meanwhile, as many as 29% of surveyed shoppers state that they will never go back to shopping in person again; in the UK, 43% of consumers state that they expect to keep on shopping the same way even after the lockdown is over.", "title": "E-commerce during COVID-19" }, { "paragraph_id": 50, "text": "E-commerce retail sales figures show that COVID-19 has had a significant impact on e-commerce, with sales expected to reach $6.5 trillion by 2023.", "title": "E-commerce during COVID-19" }, { "paragraph_id": 51, "text": "Some common applications related to electronic commerce are:", "title": "Business application" }, { "paragraph_id": 52, "text": "A timeline for the development of e-commerce:", "title": "Timeline" } ]
E-commerce is the activity of electronically buying or selling products on online services or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is the largest sector of the electronics industry and is in turn driven by the technological advances of the semiconductor industry.
2001-07-28T00:19:35Z
2023-12-29T23:03:26Z
[ "Template:Div col", "Template:Cite news", "Template:Webarchive", "Template:Refbegin", "Template:Citation", "Template:Authority control", "Template:US$", "Template:Further", "Template:Cite book", "Template:Refend", "Template:USD", "Template:Rp", "Template:Main article", "Template:Citation needed", "Template:Cite web", "Template:Reflist", "Template:Cite magazine", "Template:Ecommerce", "Template:Use dmy dates", "Template:Convert", "Template:Div col end", "Template:Sister project links", "Template:Computer science", "Template:Short description", "Template:Main", "Template:Columns-list", "Template:Cite journal", "Template:Pp" ]
https://en.wikipedia.org/wiki/E-commerce
9,613
Euler's formula
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has e^(ix) = cos x + i sin x, where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and is also called Euler's formula in this more general case. Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics". When x = π, Euler's formula may be rewritten as e^(iπ) + 1 = 0 or e^(iπ) = −1, which is known as Euler's identity. In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of − 1 {\displaystyle {\sqrt {-1}}} ) as ix = ln(cos x + i sin x). Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of 2πi. Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work Introductio in analysin infinitorum. Johann Bernoulli had found that 1/(1 + x^2) = (1/2)(1/(1 − ix) + 1/(1 + ix)). And since integrating 1/(1 + ax) yields a natural logarithm, the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral. Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values. The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel. The exponential function e^x for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of e^z for complex values of z simply by substituting z in place of x and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of e^x to the complex plane. The exponential function f ( z ) = e z {\displaystyle f(z)=e^{z}} is the unique differentiable function of a complex variable for which the derivative equals the function, f′(z) = f(z), and which satisfies f(0) = 1. For complex z, the power-series definition is e^z = 1 + z + z^2/2! + z^3/3! + ⋯. Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines e^z for all complex z. For complex z, the limit definition is e^z = lim (1 + z/n)^n as n → ∞. Here, n is restricted to positive integers, so there is no question about what the power with exponent n means. Various proofs of the formula are possible. This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted). Consider the function f(θ) = e^(−iθ)(cos θ + i sin θ) for real θ. 
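The displayed equations for this differentiation argument appear to have been lost during text extraction; the following LaTeX block is a sketch of the standard computation, reconstructed here for clarity rather than quoted from the original article's markup:

```latex
f(\theta) = e^{-i\theta}\,(\cos\theta + i\sin\theta)

f'(\theta) = e^{-i\theta}\,(-\sin\theta + i\cos\theta)
             - i\,e^{-i\theta}\,(\cos\theta + i\sin\theta)
           = e^{-i\theta}\,\bigl(-\sin\theta + i\cos\theta - i\cos\theta + \sin\theta\bigr)
           = 0
```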
Differentiating by the product rule, as shown above, gives f′(θ) = 0. Thus, f(θ) is a constant. Since f(0) = 1, then f(θ) = 1 for all real θ, and thus e^(iθ) = cos θ + i sin θ. Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of i: Using now the power-series definition from above, we see that for real values of x the series for e^(ix) separates into its real and imaginary parts, where in the last step we recognize the two terms are the Maclaurin series for cos x and sin x. The rearrangement of terms is justified because each series is absolutely convergent. Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, for some r and θ depending on x, e^(ix) = r(cos θ + i sin θ). No assumptions are being made about r and θ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of e^(ix) is ie^(ix). Therefore, differentiating both sides gives ie^(ix) = (dr/dx)(cos θ + i sin θ) + r(−sin θ + i cos θ)(dθ/dx). Substituting r(cos θ + i sin θ) for e^(ix) and equating real and imaginary parts in this formula gives dr/dx = 0 and dθ/dx = 1. Thus, r is a constant, and θ is x + C for some constant C. The initial values r(0) = 1 and θ(0) = 0 come from e^(i·0) = 1, giving r = 1 and θ = x. This proves the formula e^(iφ) = cos φ + i sin φ. This formula can be interpreted as saying that the function e^(iφ) is a unit complex number, i.e., it traces out the unit circle in the complex plane as φ ranges through the real numbers. Here φ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians. The original proof is based on the Taylor series expansions of the exponential function e^z (where z is a complex number) and of sin x and cos x for real numbers x (see above). In fact, the same proof shows that Euler's formula is even valid for all complex numbers x. A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number z = x + iy, and its complex conjugate, z̄ = x − iy, can be written as z = r(cos φ + i sin φ) = re^(iφ) and z̄ = r(cos φ − i sin φ) = re^(−iφ), where r = |z| = √(x^2 + y^2) and φ is the argument of z, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of 2π. Many texts write φ = tan^(−1)(y/x) instead of φ = atan2(y, x), but the first equation needs adjustment when x ≤ 0. This is because for any real x and y, not both zero, the angles of the vectors (x, y) and (−x, −y) differ by π radians, but have the identical value of tan φ = y/x. Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation), a = e^(ln a), and the fact that e^a e^b = e^(a + b), both valid for any complex numbers a and b. Therefore, one can write z = |z| e^(iφ) = e^(ln|z|) e^(iφ) = e^(ln|z| + iφ) for any z ≠ 0. Taking the logarithm of both sides shows that ln z = ln|z| + iφ, and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued. Finally, the other exponential law, (e^a)^k = e^(ak), which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula. Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. 
It provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function: cos x = (e^(ix) + e^(−ix))/2 and sin x = (e^(ix) − e^(−ix))/(2i). The two equations above can be derived by adding or subtracting Euler's formulas e^(ix) = cos x + i sin x and e^(−ix) = cos x − i sin x, and solving for either cosine or sine. These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting x = iy, we have cos(iy) = (e^(−y) + e^(y))/2 = cosh y and sin(iy) = (e^(−y) − e^(y))/(2i) = i sinh y. Complex exponentials can simplify trigonometry, because they are easier to manipulate than their sinusoidal components. One technique is simply to convert sinusoids into equivalent expressions in terms of exponentials. After the manipulations, the simplified result is still real-valued. For example: Another technique is to represent the sinusoids in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example: This formula is used for recursive generation of cos nx for integer values of n and arbitrary x (in radians). In the language of topology, Euler's formula states that the imaginary exponential function t ↦ e i t {\displaystyle t\mapsto e^{it}} is a (surjective) morphism of topological groups from the real line R {\displaystyle \mathbb {R} } to the unit circle S 1 {\displaystyle \mathbb {S} ^{1}} . In fact, this exhibits R {\displaystyle \mathbb {R} } as a covering space of S 1 {\displaystyle \mathbb {S} ^{1}} . Similarly, Euler's identity says that the kernel of this map is τ Z {\displaystyle \tau \mathbb {Z} } , where τ = 2 π {\displaystyle \tau =2\pi } . These observations may be combined and summarized in the commutative diagram below: In differential equations, the function e^(ix) is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation. In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor. In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point r on this sphere, and x a real number, Euler's formula applies: exp(xr) = cos x + r sin x, and the element is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space.
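To make the applications above concrete, here is a minimal numerical sketch, added for illustration and not part of the original article text. It checks Euler's formula and identity, the exponential expressions for cosine and sine, the polar-form logarithm, and a phasor-style impedance computation; the resistor, inductor, capacitor, and frequency values in the last part are arbitrary assumptions chosen only for the example.

```python
import cmath
import math

# Euler's formula: e^(ix) = cos x + i sin x, checked at a sample point
x = 0.75
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
assert abs(lhs - rhs) < 1e-12

# Euler's identity: e^(i*pi) + 1 = 0 (up to floating-point rounding)
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# cos and sin as weighted sums of complex exponentials
cos_x = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
sin_x = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
assert abs(cos_x - math.cos(x)) < 1e-12
assert abs(sin_x - math.sin(x)) < 1e-12

# Polar form and complex logarithm: ln z = ln|z| + i*atan2(y, x)
z = complex(-3.0, 4.0)
r, phi = abs(z), math.atan2(z.imag, z.real)
assert abs(cmath.log(z) - complex(math.log(r), phi)) < 1e-12

# Phasor-style impedance of an assumed series RLC branch at angular
# frequency w, using j*w*L for the inductor and 1/(j*w*C) for the capacitor
R, L, C, w = 50.0, 1e-3, 1e-6, 2 * math.pi * 1e4
Z = R + 1j * w * L + 1 / (1j * w * C)
print(abs(Z), math.degrees(cmath.phase(Z)))  # magnitude and phase angle
```

Running the script raises no assertion errors and prints the magnitude and phase of the assumed RLC branch, illustrating how the complex exponential carries both amplitude and angle in one quantity.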
[ { "paragraph_id": 0, "text": "Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has", "title": "" }, { "paragraph_id": 1, "text": "where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis x (\"cosine plus i sine\"). The formula is still valid if x is a complex number, and is also called Euler's formula in this more general case.", "title": "" }, { "paragraph_id": 2, "text": "Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation \"our jewel\" and \"the most remarkable formula in mathematics\".", "title": "" }, { "paragraph_id": 3, "text": "When x = π, Euler's formula may be rewritten as e + 1 = 0 or e = -1, which is known as Euler's identity.", "title": "" }, { "paragraph_id": 4, "text": "In 1714, the English mathematician Roger Cotes presented a geometrical argument that can be interpreted (after correcting a misplaced factor of − 1 {\\displaystyle {\\sqrt {-1}}} ) as:", "title": "History" }, { "paragraph_id": 5, "text": "Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of 2πi.", "title": "History" }, { "paragraph_id": 6, "text": "Around 1740 Leonhard Euler turned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions. The formula was first published in 1748 in his foundational work Introductio in analysin infinitorum.", "title": "History" }, { "paragraph_id": 7, "text": "Johann Bernoulli had found that", "title": "History" }, { "paragraph_id": 8, "text": "And since", "title": "History" }, { "paragraph_id": 9, "text": "the above equation tells us something about complex logarithms by relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral.", "title": "History" }, { "paragraph_id": 10, "text": "Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand complex logarithms. Euler also suggested that complex logarithms can have infinitely many values.", "title": "History" }, { "paragraph_id": 11, "text": "The view of complex numbers as points in the complex plane was described about 50 years later by Caspar Wessel.", "title": "History" }, { "paragraph_id": 12, "text": "The exponential function e for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of e for complex values of z simply by substituting z in place of x and using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. 
From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of e to the complex plane.", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 13, "text": "The exponential function f ( z ) = e z {\\displaystyle f(z)=e^{z}} is the unique differentiable function of a complex variable for which the derivative equals the function", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 14, "text": "and", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 15, "text": "For complex z", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 16, "text": "Using the ratio test, it is possible to show that this power series has an infinite radius of convergence and so defines e for all complex z.", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 17, "text": "For complex z", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 18, "text": "Here, n is restricted to positive integers, so there is no question about what the power with exponent n means.", "title": "Definitions of complex exponentiation" }, { "paragraph_id": 19, "text": "Various proofs of the formula are possible.", "title": "Proofs" }, { "paragraph_id": 20, "text": "This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero, so this is permitted).", "title": "Proofs" }, { "paragraph_id": 21, "text": "Consider the function f(θ)", "title": "Proofs" }, { "paragraph_id": 22, "text": "for real θ. Differentiating gives by the product rule", "title": "Proofs" }, { "paragraph_id": 23, "text": "Thus, f(θ) is a constant. Since f(0) = 1, then f(θ) = 1 for all real θ, and thus", "title": "Proofs" }, { "paragraph_id": 24, "text": "Here is a proof of Euler's formula using power-series expansions, as well as basic facts about the powers of i:", "title": "Proofs" }, { "paragraph_id": 25, "text": "Using now the power-series definition from above, we see that for real values of x", "title": "Proofs" }, { "paragraph_id": 26, "text": "where in the last step we recognize the two terms are the Maclaurin series for cos x and sin x. The rearrangement of terms is justified because each series is absolutely convergent.", "title": "Proofs" }, { "paragraph_id": 27, "text": "Another proof is based on the fact that all complex numbers can be expressed in polar coordinates. Therefore, for some r and θ depending on x,", "title": "Proofs" }, { "paragraph_id": 28, "text": "No assumptions are being made about r and θ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative of e is ie. Therefore, differentiating both sides gives", "title": "Proofs" }, { "paragraph_id": 29, "text": "Substituting r(cos θ + i sin θ) for e and equating real and imaginary parts in this formula gives dr/dx = 0 and dθ/dx = 1. Thus, r is a constant, and θ is x + C for some constant C. The initial values r(0) = 1 and θ(0) = 0 come from e = 1, giving r = 1 and θ = x. This proves the formula", "title": "Proofs" }, { "paragraph_id": 30, "text": "This formula can be interpreted as saying that the function e is a unit complex number, i.e., it traces out the unit circle in the complex plane as φ ranges through the real numbers. 
Here φ is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians.", "title": "Applications" }, { "paragraph_id": 31, "text": "The original proof is based on the Taylor series expansions of the exponential function e (where z is a complex number) and of sin x and cos x for real numbers x (see above). In fact, the same proof shows that Euler's formula is even valid for all complex numbers x.", "title": "Applications" }, { "paragraph_id": 32, "text": "A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number z = x + iy, and its complex conjugate, z = x − iy, can be written as", "title": "Applications" }, { "paragraph_id": 33, "text": "where", "title": "Applications" }, { "paragraph_id": 34, "text": "φ is the argument of z, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of 2π. Many texts write φ = tan y/x instead of φ = atan2(y, x), but the first equation needs adjustment when x ≤ 0. This is because for any real x and y, not both zero, the angles of the vectors (x, y) and (−x, −y) differ by π radians, but have the identical value of tan φ = y/x.", "title": "Applications" }, { "paragraph_id": 35, "text": "Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation):", "title": "Applications" }, { "paragraph_id": 36, "text": "and that", "title": "Applications" }, { "paragraph_id": 37, "text": "both valid for any complex numbers a and b. Therefore, one can write:", "title": "Applications" }, { "paragraph_id": 38, "text": "for any z ≠ 0. Taking the logarithm of both sides shows that", "title": "Applications" }, { "paragraph_id": 39, "text": "and in fact, this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued.", "title": "Applications" }, { "paragraph_id": 40, "text": "Finally, the other exponential law", "title": "Applications" }, { "paragraph_id": 41, "text": "which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula.", "title": "Applications" }, { "paragraph_id": 42, "text": "Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. It provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function:", "title": "Applications" }, { "paragraph_id": 43, "text": "The two equations above can be derived by adding or subtracting Euler's formulas:", "title": "Applications" }, { "paragraph_id": 44, "text": "and solving for either cosine or sine.", "title": "Applications" }, { "paragraph_id": 45, "text": "These formulas can even serve as the definition of the trigonometric functions for complex arguments x. 
For example, letting x = iy, we have:", "title": "Applications" }, { "paragraph_id": 46, "text": "Complex exponentials can simplify trigonometry, because they are easier to manipulate than their sinusoidal components. One technique is simply to convert sinusoids into equivalent expressions in terms of exponentials. After the manipulations, the simplified result is still real-valued. For example:", "title": "Applications" }, { "paragraph_id": 47, "text": "Another technique is to represent the sinusoids in terms of the real part of a complex expression and perform the manipulations on the complex expression. For example:", "title": "Applications" }, { "paragraph_id": 48, "text": "This formula is used for recursive generation of cos nx for integer values of n and arbitrary x (in radians).", "title": "Applications" }, { "paragraph_id": 49, "text": "In the language of topology, Euler's formula states that the imaginary exponential function t ↦ e i t {\\displaystyle t\\mapsto e^{it}} is a (surjective) morphism of topological groups from the real line R {\\displaystyle \\mathbb {R} } to the unit circle S 1 {\\displaystyle \\mathbb {S} ^{1}} . In fact, this exhibits R {\\displaystyle \\mathbb {R} } as a covering space of S 1 {\\displaystyle \\mathbb {S} ^{1}} . Similarly, Euler's identity says that the kernel of this map is τ Z {\\displaystyle \\tau \\mathbb {Z} } , where τ = 2 π {\\displaystyle \\tau =2\\pi } . These observations may be combined and summarized in the commutative diagram below:", "title": "Applications" }, { "paragraph_id": 50, "text": "In differential equations, the function e is often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is the eigenfunction of the operation of differentiation.", "title": "Applications" }, { "paragraph_id": 51, "text": "In electrical engineering, signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (see Fourier analysis), and these are more conveniently expressed as the sum of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor.", "title": "Applications" }, { "paragraph_id": 52, "text": "In the four-dimensional space of quaternions, there is a sphere of imaginary units. For any point r on this sphere, and x a real number, Euler's formula applies:", "title": "Applications" }, { "paragraph_id": 53, "text": "and the element is called a versor in quaternions. The set of all versors forms a 3-sphere in the 4-space.", "title": "Applications" } ]
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x, one has e^(ix) = cos x + i sin x, where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively. This complex exponential function is sometimes denoted cis x. The formula is still valid if x is a complex number, and is also called Euler's formula in this more general case. Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicist Richard Feynman called the equation "our jewel" and "the most remarkable formula in mathematics". When x = π, Euler's formula may be rewritten as e^(iπ) + 1 = 0 or e^(iπ) = −1, which is known as Euler's identity.
2001-07-28T01:08:36Z
2023-12-30T19:57:44Z
[ "Template:About", "Template:Mvar", "Template:ISBN", "Template:Leonhard Euler", "Template:Use dmy dates", "Template:Math", "Template:Further", "Template:Pi", "Template:See also", "Template:Cite journal", "Template:Short description", "Template:E (mathematical constant)", "Template:Unreferenced section", "Template:Section link", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Euler%27s_formula
9,615
Édouard Manet
Édouard Manet (UK: /ˈmæneɪ/, US: /mæˈneɪ, məˈ-/; French: [edwaʁ manɛ]; 23 January 1832 – 30 April 1883) was a French modernist painter. He was one of the first 19th-century artists to paint modern life, as well as a pivotal figure in the transition from Realism to Impressionism. Born into an upper-class household with strong political connections, Manet rejected the naval career originally envisioned for him; he became engrossed in the world of painting. His early masterworks, The Luncheon on the Grass (Le déjeuner sur l'herbe) or Olympia, "premiering" in 1863 and '65, respectively, caused great controversy with both critics and the Academy of Fine Arts, but soon were praised by progressive artists as the breakthrough acts to the new style, Impressionism. Today too, these works, along with others, are considered watershed paintings that mark the start of modern art. The last 20 years of Manet's life saw him form bonds with other great artists of the time; he developed his own simple and direct style that would be heralded as innovative and serve as a major influence for future painters. Édouard Manet was born in Paris on 23 January 1832, in the ancestral hôtel particulier (mansion) on the Rue des Petits Augustins (now Rue Bonaparte) to an affluent and well-connected family. His mother, Eugénie-Desirée Fournier, was the daughter of a diplomat and goddaughter of the Swedish crown prince Charles Bernadotte, from whom the Swedish monarchs are descended. His father, Auguste Manet, was a French judge who expected Édouard to pursue a career in law. His uncle, Edmond Fournier, encouraged him to pursue painting and took young Manet to the Louvre. In 1844, he enrolled at secondary school, the Collège Rollin, where he boarded until 1848. He showed little academic talent and was generally unhappy at the school. In 1845, at the advice of his uncle, Manet enrolled in a special course of drawing where he met Antonin Proust, future Minister of Fine Arts and subsequent lifelong friend. At his father's suggestion, in 1848 he sailed on a training vessel to Rio de Janeiro. After he twice failed the examination to join the Navy, his father relented to his wishes to pursue an art education. From 1850 to 1856, Manet studied under the academic painter Thomas Couture. Couture encouraged his students to paint contemporary life, though he would eventually be horrified by Manet's choice of lower-class and "degenerate" subjects such as The Absinthe Drinker. In his spare time, Manet copied Old Masters such as Diego Velázquez and Titian in the Louvre. From 1853 to 1856, Manet made brief visits to Germany, Italy, and the Netherlands, during which time he was influenced by the Dutch painter Frans Hals and the Spanish artists Velázquez and Francisco José de Goya. In 1856, Manet opened a studio. His style in this period was characterized by loose brush strokes, simplification of details, and the suppression of transitional tones. Adopting the current style of realism initiated by Gustave Courbet, he painted The Absinthe Drinker (1858–59) and other contemporary subjects such as beggars, singers, Gypsies, people in cafés, and bullfights. After his early career, he rarely painted religious, mythological, or historical subjects; religious paintings from 1864 include his Jesus Mocked by the Soldiers and The Dead Christ with Angels. Manet had two canvases accepted at the Salon in 1861. A portrait of his mother and father (Portrait of M. 
and Mme Auguste Manet), the latter of whom at the time was paralysed by a stroke or advanced syphilis, was ill-received by critics. The other, The Spanish Singer, was admired by Théophile Gautier, and placed in a more conspicuous location as a result of its popularity with Salon-goers. Manet's work, which appeared "slightly slapdash" when compared with the meticulous style of so many other Salon paintings, intrigued some young artists and brought new business to his studio. According to one contemporary source, The Spanish Singer, painted in a "strange new fashion[,] caused many painters' eyes to open and their jaws to drop." In 1862, Manet exhibited Music in the Tuileries (probably painted in 1860), one of his first masterpieces. With its portrayal of a crowd of subjects at the Jardin des Tuileries, the painting shows the outdoor leisure of contemporary Paris, which would be a lifelong subject of Manet's. Among the figures in the gardens are the poet Charles Baudelaire, the musician Jacques Offenbach, and others of Manet's family and friends, including a self-portrait of the artist. Music in the Tuileries received substantial critical and public attention, most of it negative. In the words of one Manet biographer, "it is difficult for us to imagine the kind of fury Music in the Tuileries provoked when it was exhibited". By portraying Manet's social circle instead of classical heroes, historical icons, or gods, the painting could be interpreted as challenging the value of those subjects or as an attempt to elevate his contemporaries to the same level. The public, accustomed to the finely detailed brushwork of historical painters such as Ernest Meissonier, thought Manet's thick brushstrokes looked crude and unfinished. Angered by the subject matter and technique, several visitors even threatened to destroy the painting. One of Manet's idols, Eugène Delacroix, was one of the painting's few defenders. Despite the largely negative reaction, the controversy made Manet a well-known name in Paris. Another major early work is The Luncheon on the Grass (Le Déjeuner sur l'herbe), originally Le Bain. The Paris Salon rejected it for exhibition in 1863, but Manet agreed to exhibit it at the Salon des Refusés (Salon of the Rejected). This parallel salon was initiated by Emperor Napoleon III as a solution to the public outcry after the official salon's Selection Committee only accepted 2217 paintings out of more than 5000 submissions, and allowed rejected artists to still display their paintings if they chose. The painting's juxtaposition of fully dressed men and a nude woman was controversial, as was its abbreviated, sketch-like handling, an innovation that distinguished Manet from Courbet. One critic stated that the brushwork appeared to have been done with a "floor mop". However, others such as his friend Antonin Proust celebrated the painting, and novelist Émile Zola was so affected by the experience of viewing it that he later based the title painting in his novel L'Œuvre ("The Work of Art") on Le Déjeuner sur l'herbe. At the same time, Manet's composition reveals his study of the old masters, as the disposition of the main figures is derived from Marcantonio Raimondi's engraving of the Judgement of Paris (c. 1515) based on a drawing by Raphael. Two additional works cited by scholars as important precedents for Le Déjeuner sur l'herbe are Pastoral Concert (c. 1510) and The Tempest, both of which are attributed variously to Italian Renaissance masters Giorgione or Titian. 
Le Déjeuner and James McNeill Whistler's Symphony in White, No. 1: The White Girl were the two most discussed works of the Salon des Refusés, which itself would become one of the most famous art exhibitions of all time. Following the Salon, Manet became yet more notorious and widely discussed. However, Le Déjeuner sur l'herbe and Manet's other paintings still failed to sell, and Manet continued living off of his inheritance from his recently deceased father. As he had in Luncheon on the Grass, Manet again paraphrased a respected work by a Renaissance artist in the painting Olympia (1863), a nude portrayed in a style reminiscent of early studio photographs, but whose pose was based on Titian's Venus of Urbino (1538). The painting is also reminiscent of Francisco Goya's painting The Nude Maja (1800). Manet embarked on the canvas after being challenged to give the Salon a nude painting to display. His uniquely frank depiction of a self-assured prostitute was accepted by the Paris Salon in 1865, where it created a scandal. According to Antonin Proust, "only the precautions taken by the administration prevented the painting being punctured and torn" by offended viewers. The painting was controversial partly because the nude is wearing some small items of clothing such as an orchid in her hair, a bracelet, a ribbon around her neck, and mule slippers, all of which accentuated her nakedness, sexuality, and comfortable courtesan lifestyle. The orchid, upswept hair, black cat, and bouquet of flowers were all recognized symbols of sexuality at the time. This modern Venus' body is thin, counter to prevailing standards; the painting's lack of idealism rankled viewers. The painting's flatness, inspired by Japanese wood block art, serves to make the nude more human and less voluptuous. A fully dressed black servant is featured, exploiting the then-current theory that black people were hyper-sexed. That she is wearing the clothing of a servant to a courtesan here furthers the sexual tension of the piece. Olympia's body as well as her gaze is unabashedly confrontational. She defiantly looks out as her servant offers flowers from one of her male suitors. Although her hand rests on her leg, hiding her pubic area, the reference to traditional female virtue is ironic; a notion of modesty is notoriously absent in this work. A contemporary critic denounced Olympia's "shamelessly flexed" left hand, which seemed to him a mockery of the relaxed, shielding hand of Titian's Venus. Likewise, the alert black cat at the foot of the bed strikes a sexually rebellious note in contrast to that of the sleeping dog in Titian's portrayal of the goddess in his Venus of Urbino. Olympia was the subject of caricatures in the popular press, but was championed by the French avant-garde community, and the painting's significance was appreciated by artists such as Gustave Courbet, Paul Cézanne, Claude Monet, and later Paul Gauguin. As with Luncheon on the Grass, the painting raised the issue of prostitution within contemporary France and the roles of women within society. After the death of his father in 1862, Manet married Suzanne Leenhoff in 1863 at a Protestant church. Leenhoff was a Dutch-born piano teacher two years Manet's senior with whom he had been romantically involved for approximately ten years. Leenhoff initially had been employed by Manet's father, Auguste, to teach Manet and his younger brother piano. She also may have been Auguste's mistress. In 1852, Leenhoff gave birth, out of wedlock, to a son, Leon Koella Leenhoff. 
Manet painted his wife in The Reading, among other paintings. Her son, Leon Leenhoff, whose father may have been either of the Manets, posed often for Manet. Most famously, he is the subject of the Boy Carrying a Sword of 1861 (Metropolitan Museum of Art, New York). He also appears as the boy carrying a tray in the background of The Balcony (1868–69). Manet became friends with the Impressionists Edgar Degas, Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, Paul Cézanne, and Camille Pissarro through another painter, Berthe Morisot, who was a member of the group and drew him into their activities. They later became widely known as the Batignolles group (Le groupe des Batignolles). The supposed grand-niece of the painter Jean-Honoré Fragonard, Morisot had her first painting accepted in the Salon de Paris in 1864, and she continued to show in the salon for the next ten years. Manet became the friend and colleague of Morisot in 1868. She is credited with convincing Manet to attempt plein air painting, which she had been practicing since she was introduced to it by another friend of hers, Camille Corot. They had a reciprocating relationship and Manet incorporated some of her techniques into his paintings. In 1874, she became his sister-in-law when she married his brother, Eugène. Unlike the core Impressionist group, Manet maintained that modern artists should seek to exhibit at the Paris Salon rather than abandon it in favor of independent exhibitions. Nevertheless, when Manet was excluded from the International Exhibition of 1867, he set up his own exhibition. His mother worried that he would waste all his inheritance on this project, which was enormously expensive. While the exhibition earned poor reviews from the major critics, it also provided his first contacts with several future Impressionist painters, including Degas. Although his own work influenced and anticipated the Impressionist style, Manet resisted involvement in Impressionist exhibitions, partly because he did not wish to be seen as the representative of a group identity, and partly because he preferred to exhibit at the Salon. Eva Gonzalès, a daughter of the novelist Emmanuel Gonzalès, was his only formal student. He was influenced by the Impressionists, especially Monet and Morisot. Their influence is seen in Manet's use of lighter colors: after the early 1870s he made less use of dark backgrounds but retained his distinctive use of black, uncharacteristic of Impressionist painting. He painted many outdoor (plein air) pieces, but always returned to what he considered the serious work of the studio. Manet enjoyed a close friendship with composer Emmanuel Chabrier, painting two portraits of him; the musician owned 14 of Manet's paintings and dedicated his Impromptu to Manet's wife. One of Manet's frequent models at the beginning of the 1880s was the "semimondaine" Méry Laurent, who posed for seven portraits in pastel. Laurent's salons hosted many French (and even American) writers and painters of her time; Manet had connections and influence through such events. Throughout his life, although resisted by art critics, Manet could number as his champions Émile Zola, who supported him publicly in the press, Stéphane Mallarmé, and Charles Baudelaire, who challenged him to depict life as it was. Manet, in turn, drew or painted each of them. Manet's paintings of café scenes are observations of social life in 19th-century Paris. People are depicted drinking beer, listening to music, flirting, reading, or waiting. 
Many of these paintings were based on sketches executed on the spot. Manet often visited the Brasserie Reichshoffen on boulevard de Rochechourt, upon which he based At the Cafe in 1878. Several people are at the bar, and one woman confronts the viewer while others wait to be served. Such depictions represent the painted journal of a flâneur. These are painted in a style which is loose, referencing Hals and Velázquez, yet they capture the mood and feeling of Parisian night life. They are painted snapshots of bohemianism, urban working people, as well as some of the bourgeoisie. In Corner of a Café-Concert, a man smokes while behind him a waitress serves drinks. In The Beer Drinkers a woman enjoys her beer in the company of a friend. In The Café-Concert, shown at right, a sophisticated gentleman sits at a bar while a waitress stands resolutely in the background, sipping her drink. In The Waitress, a serving woman pauses for a moment behind a seated customer smoking a pipe, while a ballet dancer, with arms extended as she is about to turn, is on stage in the background. Manet also sat at the restaurant on the Avenue de Clichy called Pere Lathuille's, which had a garden in addition to the dining area. One of the paintings he produced here was Chez le père Lathuille (At Pere Lathuille's), in which a man displays an unrequited interest in a woman dining near him. In Le Bon Bock (1873), a large, cheerful, bearded man sits with a pipe in one hand and a glass of beer in the other, looking straight at the viewer. Manet painted the upper class enjoying more formal social activities. In Masked Ball at the Opera, Manet shows a lively crowd of people enjoying a party. Men stand with top hats and long black suits while talking to women with masks and costumes. He included portraits of his friends in this picture. His 1868 painting The Luncheon was posed in the dining room of the Manet house. Manet depicted other popular activities in his work. In The Races at Longchamp, an unusual perspective is employed to underscore the furious energy of racehorses as they rush toward the viewer. In Skating, Manet shows a well dressed woman in the foreground, while others skate behind her. Always there is the sense of active urban life continuing behind the subject, extending outside the frame of the canvas. In View of the International Exhibition, soldiers relax, seated and standing, prosperous couples are talking. There is a gardener, a boy with a dog, a woman on horseback—in short, a sample of the classes and ages of the people of Paris. Manet's response to modern life included works devoted to war, in subjects that may be seen as updated interpretations of the genre of "history painting". The first such work was The Battle of the Kearsarge and the Alabama (1864), a sea skirmish known as the Battle of Cherbourg from the American Civil War which took place off the French coast, and may have been witnessed by the artist. Of interest next was the French intervention in Mexico; from 1867 to 1869 Manet painted three versions of the Execution of Emperor Maximilian, an event which raised concerns regarding French foreign and domestic policy. The several versions of the Execution are among Manet's largest paintings, which suggests that the theme was one which the painter regarded as most important. Its subject is the execution by Mexican firing squad of a Habsburg emperor who had been installed by Napoleon III. Neither the paintings nor a lithograph of the subject were permitted to be shown in France. 
As an indictment of formalized slaughter, the paintings look back to Goya, and anticipate Picasso's Guernica. During the Franco-Prussian War, Manet served in the National Guard to help defend the city during the siege of Paris, along with Degas. In January 1871, he traveled to Oloron-Sainte-Marie in the Pyrenees. In his absence his friends added his name to the "Fédération des artistes" (see: Courbet) of the Paris Commune. Manet stayed away from Paris, perhaps, until after the semaine sanglante: in a letter to Berthe Morisot at Cherbourg (10 June 1871) he writes, "We came back to Paris a few days ago..." (the semaine sanglante ended on 28 May). The prints and drawings collection of the Museum of Fine Arts (Budapest) has a watercolour/gouache by Manet, The Barricade, depicting a summary execution of Communards by Versailles troops based on a lithograph of the execution of Maximilian. A similar piece, The Barricade (oil on plywood), is held by a private collector. On 18 March 1871, he wrote to his (confederate) friend Félix Bracquemond in Paris about his visit to Bordeaux, the provisional seat of the French National Assembly of the Third French Republic where Émile Zola introduced him to the sites: "I never imagined that France could be represented by such doddering old fools, not excepting that little twit Thiers..." If this could be interpreted as support of the Commune, a following letter to Bracquemond (21 March 1871) expressed his idea more clearly: "Only party hacks and the ambitious, the Henrys of this world following on the heels of the Milliéres, the grotesque imitators of the Commune of 1793". He knew the communard Lucien Henry to have been a former painter's model and Millière, an insurance agent. "What an encouragement all these bloodthirsty caperings are for the arts! But there is at least one consolation in our misfortunes: that we're not politicians and have no desire to be elected as deputies". The public figure Manet admired most was the republican Léon Gambetta. In the heat of the seize mai coup in 1877, Manet opened up his atelier to a republican electoral meeting chaired by Gambetta's friend Eugène Spuller. Manet depicted many scenes of the streets of Paris in his works. The Rue Mosnier Decked with Flags depicts red, white, and blue pennants covering buildings on either side of the street; another painting of the same title features a one-legged man walking with crutches. Again depicting the same street, but this time in a different context, is Rue Mosnier with Pavers, in which men repair the roadway while people and horses move past. The Railway, widely known as The Gare Saint-Lazare, was painted in 1873. The setting is the urban landscape of Paris in the late 19th century. Using his favorite model in his last painting of her, a fellow painter, Victorine Meurent, also the model for Olympia and the Luncheon on the Grass, sits before an iron fence holding a sleeping puppy and an open book in her lap. Next to her is a little girl with her back to the painter, watching a train pass beneath them. Instead of choosing the traditional natural view as background for an outdoor scene, Manet opts for the iron grating which "boldly stretches across the canvas". The only evidence of the train is its white cloud of steam. In the distance, modern apartment buildings are seen. This arrangement compresses the foreground into a narrow focus. The traditional convention of deep space is ignored. 
Historian Isabelle Dervaux has described the reception this painting received when it was first exhibited at the official Paris Salon of 1874: "Visitors and critics found its subject baffling, its composition incoherent, and its execution sketchy. Caricaturists ridiculed Manet's picture, in which only a few recognized the symbol of modernity that it has become today". The painting is currently in the National Gallery of Art in Washington, D.C. Manet painted several boating subjects in 1874. Boating, now in the Metropolitan Museum of Art, exemplifies in its conciseness the lessons Manet learned from Japanese prints, and the abrupt cropping by the frame of the boat and sail adds to the immediacy of the image. In 1875, a book-length French edition of Edgar Allan Poe's The Raven included lithographs by Manet and translation by Mallarmé. In 1881, with pressure from his friend Antonin Proust, the French government awarded Manet the Légion d'honneur. In his mid-forties Manet's health deteriorated, and he developed severe pain and partial paralysis in his legs. In 1879 he began receiving hydrotherapy treatments at a spa near Meudon intended to improve what he believed was a circulatory problem, but in reality he was suffering from locomotor ataxia, a known side-effect of syphilis. In 1880, he painted a portrait there of the opera singer Émilie Ambre as Carmen. Ambre and her lover Gaston de Beauplan had an estate in Meudon and had organized the first exhibition of Manet's The Execution of Emperor Maximilian in New York in December 1879. In his last years Manet painted many small-scale still lifes of fruits and vegetables, such as A Bunch of Asparagus and The Lemon (both 1880). He completed his last major work, A Bar at the Folies-Bergère (Un Bar aux Folies-Bergère), in 1882, and it hung in the Salon that year. Afterwards, he limited himself to small formats. Manet's last paintings were of flowers in glass vases. There are 20 such paintings known, with the last one painted in March 1883, barely two months before his death. Quoted in Venice thirteen years later, Manet is credited with stating that an artist can say everything he has to say with "flowers, fruit, and clouds." His last flower paintings are a demonstration of that belief. In 2023, the Metropolitan Museum of Art in New York City exhibited a two-person exhibition of Manet with Degas. In April 1883, his left foot was amputated because of gangrene caused by complications from syphilis and rheumatism. He died eleven days later on 30 April in Paris. He is buried in the Passy Cemetery in the city. Manet's public career lasted from 1861, the year of his first participation in the Salon, until his death in 1883. His known extant works, as catalogued in 1975 by Denis Rouart and Daniel Wildenstein, comprise 430 oil paintings, 89 pastels, and more than 400 works on paper. Although harshly condemned by critics who decried its lack of conventional finish, Manet's work had admirers from the beginning. One was Émile Zola, who wrote in 1867: "We are not accustomed to seeing such simple and direct translations of reality. Then, as I said, there is such a surprisingly elegant awkwardness ... it is a truly charming experience to contemplate this luminous and serious painting which interprets nature with a gentle brutality." The roughly painted style and photographic lighting in Manet's paintings was seen as specifically modern, and as a challenge to the Renaissance works he copied or used as source material. 
He rejected the technique he had learned in the studio of Thomas Couture – in which a painting was constructed using successive layers of paint on a dark-toned ground – in favor of a direct, alla prima method using opaque paint on a light ground. Novel at the time, this method made possible the completion of a painting in a single sitting. It was adopted by the Impressionists, and became the prevalent method of painting in oils for generations that followed. Manet's work is considered "early modern", partially because of the opaque flatness of his surfaces, the frequent sketch-like passages, and the black outlining of figures, all of which draw attention to the surface of the picture plane and the material quality of paint. The art historian Beatrice Farwell says Manet "has been universally regarded as the Father of Modernism. With Courbet he was among the first to take serious risks with the public whose favour he sought, the first to make alla prima painting the standard technique for oil painting and one of the first to take liberties with Renaissance perspective and to offer 'pure painting' as a source of aesthetic pleasure. He was a pioneer, again with Courbet, in the rejection of humanistic and historical subject-matter, and shared with Degas the establishment of modern urban life as acceptable material for high art." The late Manet painting, Le Printemps (1881), sold to the J. Paul Getty Museum for $65.1 million, setting a new auction record for Manet, exceeding its pre-sale estimate of $25–35 million at Christie's on 5 November 2014. The previous auction record was held by Self-Portrait With Palette which sold for $33.2 million at Sotheby's on 22 June 2010.
[ { "paragraph_id": 0, "text": "Édouard Manet (UK: /ˈmæneɪ/, US: /mæˈneɪ, məˈ-/; French: [edwaʁ manɛ]; 23 January 1832 – 30 April 1883) was a French modernist painter. He was one of the first 19th-century artists to paint modern life, as well as a pivotal figure in the transition from Realism to Impressionism.", "title": "" }, { "paragraph_id": 1, "text": "Born into an upper-class household with strong political connections, Manet rejected the naval career originally envisioned for him; he became engrossed in the world of painting. His early masterworks, The Luncheon on the Grass (Le déjeuner sur l'herbe) or Olympia, \"premiering\" in 1863 and '65, respectively, caused great controversy with both critics and the Academy of Fine Arts, but soon were praised by progressive artists as the breakthrough acts to the new style, Impressionism. Today too, these works, along with others, are considered watershed paintings that mark the start of modern art. The last 20 years of Manet's life saw him form bonds with other great artists of the time; he developed his own simple and direct style that would be heralded as innovative and serve as a major influence for future painters.", "title": "" }, { "paragraph_id": 2, "text": "Édouard Manet was born in Paris on 23 January 1832, in the ancestral hôtel particulier (mansion) on the Rue des Petits Augustins (now Rue Bonaparte) to an affluent and well-connected family. His mother, Eugénie-Desirée Fournier, was the daughter of a diplomat and goddaughter of the Swedish crown prince Charles Bernadotte, from whom the Swedish monarchs are descended. His father, Auguste Manet, was a French judge who expected Édouard to pursue a career in law. His uncle, Edmond Fournier, encouraged him to pursue painting and took young Manet to the Louvre. In 1844, he enrolled at secondary school, the Collège Rollin, where he boarded until 1848. He showed little academic talent and was generally unhappy at the school. In 1845, at the advice of his uncle, Manet enrolled in a special course of drawing where he met Antonin Proust, future Minister of Fine Arts and subsequent lifelong friend.", "title": "Early life" }, { "paragraph_id": 3, "text": "At his father's suggestion, in 1848 he sailed on a training vessel to Rio de Janeiro. After he twice failed the examination to join the Navy, his father relented to his wishes to pursue an art education. From 1850 to 1856, Manet studied under the academic painter Thomas Couture. Couture encouraged his students to paint contemporary life, though he would eventually be horrified by Manet's choice of lower-class and \"degenerate\" subjects such as The Absinthe Drinker. In his spare time, Manet copied Old Masters such as Diego Velázquez and Titian in the Louvre.", "title": "Early life" }, { "paragraph_id": 4, "text": "From 1853 to 1856, Manet made brief visits to Germany, Italy, and the Netherlands, during which time he was influenced by the Dutch painter Frans Hals and the Spanish artists Velázquez and Francisco José de Goya.", "title": "Early life" }, { "paragraph_id": 5, "text": "In 1856, Manet opened a studio. His style in this period was characterized by loose brush strokes, simplification of details, and the suppression of transitional tones. Adopting the current style of realism initiated by Gustave Courbet, he painted The Absinthe Drinker (1858–59) and other contemporary subjects such as beggars, singers, Gypsies, people in cafés, and bullfights. 
After his early career, he rarely painted religious, mythological, or historical subjects; religious paintings from 1864 include his Jesus Mocked by the Soldiers and The Dead Christ with Angels.", "title": "Career" }, { "paragraph_id": 6, "text": "Manet had two canvases accepted at the Salon in 1861. A portrait of his mother and father (Portrait of M. and Mme Auguste Manet), the latter of whom at the time was paralysed by a stroke or advanced syphilis, was ill-received by critics. The other, The Spanish Singer, was admired by Théophile Gautier, and placed in a more conspicuous location as a result of its popularity with Salon-goers. Manet's work, which appeared \"slightly slapdash\" when compared with the meticulous style of so many other Salon paintings, intrigued some young artists and brought new business to his studio. According to one contemporary source, The Spanish Singer, painted in a \"strange new fashion[,] caused many painters' eyes to open and their jaws to drop.\"", "title": "Career" }, { "paragraph_id": 7, "text": "In 1862, Manet exhibited Music in the Tuileries (probably painted in 1860), one of his first masterpieces. With its portrayal of a crowd of subjects at the Jardin des Tuileries, the painting shows the outdoor leisure of contemporary Paris, which would be a lifelong subject of Manet's. Among the figures in the gardens are the poet Charles Baudelaire, the musician Jacques Offenbach, and others of Manet's family and friends, including a self-portrait of the artist.", "title": "Career" }, { "paragraph_id": 8, "text": "Music in the Tuileries received substantial critical and public attention, most of it negative. In the words of one Manet biographer, \"it is difficult for us to imagine the kind of fury Music in the Tuileries provoked when it was exhibited\". By portraying Manet's social circle instead of classical heroes, historical icons, or gods, the painting could be interpreted as challenging the value of those subjects or as an attempt to elevate his contemporaries to the same level. The public, accustomed to the finely detailed brushwork of historical painters such as Ernest Meissonier, thought Manet's thick brushstrokes looked crude and unfinished. Angered by the subject matter and technique, several visitors even threatened to destroy the painting. One of Manet's idols, Eugène Delacroix, was of the painting's few defenders. Despite the largely negative reaction, the controversy made Manet a well-known name in Paris.", "title": "Career" }, { "paragraph_id": 9, "text": "Another major early work is The Luncheon on the Grass (Le Déjeuner sur l'herbe), originally Le Bain. The Paris Salon rejected it for exhibition in 1863, but Manet agreed to exhibit it at the Salon des Refusés (Salon of the Rejected). This parallel salon was initiated by Emperor Napoleon III as a solution to the public outcry after the official salon's Selection Committee only accepted 2217 paintings out of more than 5000 submissions, and allowed rejected artists to still display their paintings if they chose.", "title": "Career" }, { "paragraph_id": 10, "text": "The painting's juxtaposition of fully dressed men and a nude woman was controversial, as was its abbreviated, sketch-like handling, an innovation that distinguished Manet from Courbet. One critic stated that the brushwork appeared to have been done with a \"floor mop\". 
However, others such as his friend Antonin Proust celebrated the painting, and novelist Émile Zola was so affected by the experience of viewing it that he later based the title painting in his novel L'Œuvre (\"The Work of Art\") on Le Déjeuner sur l'herbe.", "title": "Career" }, { "paragraph_id": 11, "text": "At the same time, Manet's composition reveals his study of the old masters, as the disposition of the main figures is derived from Marcantonio Raimondi's engraving of the Judgement of Paris (c. 1515) based on a drawing by Raphael. Two additional works cited by scholars as important precedents for Le Déjeuner sur l'herbe are Pastoral Concert (c. 1510) and The Tempest, both of which are attributed variously to Italian Renaissance masters Giorgione or Titian.", "title": "Career" }, { "paragraph_id": 12, "text": "Le Déjeuner and James McNeill Whistler's Symphony in White, No. 1: The White Girl were the two most discussed works of the Salon des Refusés, which itself would become one of the most famous art exhibitions of all time. Following the Salon, Manet became yet more notorious and widely discussed. However, Le Déjeuner sur l'herbe and Manet's other paintings still failed to sell, and Manet continued living off of his inheritance from his recently deceased father.", "title": "Career" }, { "paragraph_id": 13, "text": "As he had in Luncheon on the Grass, Manet again paraphrased a respected work by a Renaissance artist in the painting Olympia (1863), a nude portrayed in a style reminiscent of early studio photographs, but whose pose was based on Titian's Venus of Urbino (1538). The painting is also reminiscent of Francisco Goya's painting The Nude Maja (1800).", "title": "Career" }, { "paragraph_id": 14, "text": "Manet embarked on the canvas after being challenged to give the Salon a nude painting to display. His uniquely frank depiction of a self-assured prostitute was accepted by the Paris Salon in 1865, where it created a scandal. According to Antonin Proust, \"only the precautions taken by the administration prevented the painting being punctured and torn\" by offended viewers. The painting was controversial partly because the nude is wearing some small items of clothing such as an orchid in her hair, a bracelet, a ribbon around her neck, and mule slippers, all of which accentuated her nakedness, sexuality, and comfortable courtesan lifestyle. The orchid, upswept hair, black cat, and bouquet of flowers were all recognized symbols of sexuality at the time. This modern Venus' body is thin, counter to prevailing standards; the painting's lack of idealism rankled viewers. The painting's flatness, inspired by Japanese wood block art, serves to make the nude more human and less voluptuous. A fully dressed black servant is featured, exploiting the then-current theory that black people were hyper-sexed. That she is wearing the clothing of a servant to a courtesan here furthers the sexual tension of the piece.", "title": "Career" }, { "paragraph_id": 15, "text": "Olympia's body as well as her gaze is unabashedly confrontational. She defiantly looks out as her servant offers flowers from one of her male suitors. Although her hand rests on her leg, hiding her pubic area, the reference to traditional female virtue is ironic; a notion of modesty is notoriously absent in this work. A contemporary critic denounced Olympia's \"shamelessly flexed\" left hand, which seemed to him a mockery of the relaxed, shielding hand of Titian's Venus. 
Likewise, the alert black cat at the foot of the bed strikes a sexually rebellious note in contrast to that of the sleeping dog in Titian's portrayal of the goddess in his Venus of Urbino.", "title": "Career" }, { "paragraph_id": 16, "text": "Olympia was the subject of caricatures in the popular press, but was championed by the French avant-garde community, and the painting's significance was appreciated by artists such as Gustave Courbet, Paul Cézanne, Claude Monet, and later Paul Gauguin.", "title": "Career" }, { "paragraph_id": 17, "text": "As with Luncheon on the Grass, the painting raised the issue of prostitution within contemporary France and the roles of women within society.", "title": "Career" }, { "paragraph_id": 18, "text": "After the death of his father in 1862, Manet married Suzanne Leenhoff in 1863 at a Protestant church. Leenhoff was a Dutch-born piano teacher two years Manet's senior with whom he had been romantically involved for approximately ten years. Leenhoff initially had been employed by Manet's father, Auguste, to teach Manet and his younger brother piano. She also may have been Auguste's mistress. In 1852, Leenhoff gave birth, out of wedlock, to a son, Leon Koella Leenhoff.", "title": "Career" }, { "paragraph_id": 19, "text": "Manet painted his wife in The Reading, among other paintings. Her son, Leon Leenhoff, whose father may have been either of the Manets, posed often for Manet. Most famously, he is the subject of the Boy Carrying a Sword of 1861 (Metropolitan Museum of Art, New York). He also appears as the boy carrying a tray in the background of The Balcony (1868–69).", "title": "Career" }, { "paragraph_id": 20, "text": "Manet became friends with the Impressionists Edgar Degas, Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, Paul Cézanne, and Camille Pissarro through another painter, Berthe Morisot, who was a member of the group and drew him into their activities. They later became widely known as the Batignolles group (Le groupe des Batignolles).", "title": "Career" }, { "paragraph_id": 21, "text": "The supposed grand-niece of the painter Jean-Honoré Fragonard, Morisot had her first painting accepted in the Salon de Paris in 1864, and she continued to show in the salon for the next ten years.", "title": "Career" }, { "paragraph_id": 22, "text": "Manet became the friend and colleague of Morisot in 1868. She is credited with convincing Manet to attempt plein air painting, which she had been practicing since she was introduced to it by another friend of hers, Camille Corot. They had a reciprocating relationship and Manet incorporated some of her techniques into his paintings. In 1874, she became his sister-in-law when she married his brother, Eugène.", "title": "Career" }, { "paragraph_id": 23, "text": "Unlike the core Impressionist group, Manet maintained that modern artists should seek to exhibit at the Paris Salon rather than abandon it in favor of independent exhibitions. Nevertheless, when Manet was excluded from the International Exhibition of 1867, he set up his own exhibition. His mother worried that he would waste all his inheritance on this project, which was enormously expensive. 
While the exhibition earned poor reviews from the major critics, it also provided his first contacts with several future Impressionist painters, including Degas.", "title": "Career" }, { "paragraph_id": 24, "text": "Although his own work influenced and anticipated the Impressionist style, Manet resisted involvement in Impressionist exhibitions, partly because he did not wish to be seen as the representative of a group identity, and partly because he preferred to exhibit at the Salon. Eva Gonzalès, a daughter of the novelist Emmanuel Gonzalès, was his only formal student.", "title": "Career" }, { "paragraph_id": 25, "text": "He was influenced by the Impressionists, especially Monet and Morisot. Their influence is seen in Manet's use of lighter colors: after the early 1870s he made less use of dark backgrounds but retained his distinctive use of black, uncharacteristic of Impressionist painting. He painted many outdoor (plein air) pieces, but always returned to what he considered the serious work of the studio.", "title": "Career" }, { "paragraph_id": 26, "text": "Manet enjoyed a close friendship with composer Emmanuel Chabrier, painting two portraits of him; the musician owned 14 of Manet's paintings and dedicated his Impromptu to Manet's wife.", "title": "Career" }, { "paragraph_id": 27, "text": "One of Manet's frequent models at the beginning of the 1880s was the \"semimondaine\" Méry Laurent, who posed for seven portraits in pastel. Laurent's salons hosted many French (and even American) writers and painters of her time; Manet had connections and influence through such events.", "title": "Career" }, { "paragraph_id": 28, "text": "Throughout his life, although resisted by art critics, Manet could number as his champions Émile Zola, who supported him publicly in the press, Stéphane Mallarmé, and Charles Baudelaire, who challenged him to depict life as it was. Manet, in turn, drew or painted each of them.", "title": "Career" }, { "paragraph_id": 29, "text": "Manet's paintings of café scenes are observations of social life in 19th-century Paris. People are depicted drinking beer, listening to music, flirting, reading, or waiting. Many of these paintings were based on sketches executed on the spot. Manet often visited the Brasserie Reichshoffen on boulevard de Rochechourt, upon which he based At the Cafe in 1878. Several people are at the bar, and one woman confronts the viewer while others wait to be served. Such depictions represent the painted journal of a flâneur. These are painted in a style which is loose, referencing Hals and Velázquez, yet they capture the mood and feeling of Parisian night life. They are painted snapshots of bohemianism, urban working people, as well as some of the bourgeoisie.", "title": "Career" }, { "paragraph_id": 30, "text": "In Corner of a Café-Concert, a man smokes while behind him a waitress serves drinks. In The Beer Drinkers a woman enjoys her beer in the company of a friend. In The Café-Concert, shown at right, a sophisticated gentleman sits at a bar while a waitress stands resolutely in the background, sipping her drink. In The Waitress, a serving woman pauses for a moment behind a seated customer smoking a pipe, while a ballet dancer, with arms extended as she is about to turn, is on stage in the background.", "title": "Career" }, { "paragraph_id": 31, "text": "Manet also sat at the restaurant on the Avenue de Clichy called Pere Lathuille's, which had a garden in addition to the dining area. 
One of the paintings he produced here was Chez le père Lathuille (At Pere Lathuille's), in which a man displays an unrequited interest in a woman dining near him.", "title": "Career" }, { "paragraph_id": 32, "text": "In Le Bon Bock (1873), a large, cheerful, bearded man sits with a pipe in one hand and a glass of beer in the other, looking straight at the viewer.", "title": "Career" }, { "paragraph_id": 33, "text": "Manet painted the upper class enjoying more formal social activities. In Masked Ball at the Opera, Manet shows a lively crowd of people enjoying a party. Men stand with top hats and long black suits while talking to women with masks and costumes. He included portraits of his friends in this picture.", "title": "Career" }, { "paragraph_id": 34, "text": "His 1868 painting The Luncheon was posed in the dining room of the Manet house.", "title": "Career" }, { "paragraph_id": 35, "text": "Manet depicted other popular activities in his work. In The Races at Longchamp, an unusual perspective is employed to underscore the furious energy of racehorses as they rush toward the viewer. In Skating, Manet shows a well dressed woman in the foreground, while others skate behind her. Always there is the sense of active urban life continuing behind the subject, extending outside the frame of the canvas.", "title": "Career" }, { "paragraph_id": 36, "text": "In View of the International Exhibition, soldiers relax, seated and standing, prosperous couples are talking. There is a gardener, a boy with a dog, a woman on horseback—in short, a sample of the classes and ages of the people of Paris.", "title": "Career" }, { "paragraph_id": 37, "text": "Manet's response to modern life included works devoted to war, in subjects that may be seen as updated interpretations of the genre of \"history painting\". The first such work was The Battle of the Kearsarge and the Alabama (1864), a sea skirmish known as the Battle of Cherbourg from the American Civil War which took place off the French coast, and may have been witnessed by the artist.", "title": "Career" }, { "paragraph_id": 38, "text": "Of interest next was the French intervention in Mexico; from 1867 to 1869 Manet painted three versions of the Execution of Emperor Maximilian, an event which raised concerns regarding French foreign and domestic policy. The several versions of the Execution are among Manet's largest paintings, which suggests that the theme was one which the painter regarded as most important. Its subject is the execution by Mexican firing squad of a Habsburg emperor who had been installed by Napoleon III. Neither the paintings nor a lithograph of the subject were permitted to be shown in France. As an indictment of formalized slaughter, the paintings look back to Goya, and anticipate Picasso's Guernica.", "title": "Career" }, { "paragraph_id": 39, "text": "During the Franco-Prussian War, Manet served in the National Guard to help defend the city during the siege of Paris, along with Degas. In January 1871, he traveled to Oloron-Sainte-Marie in the Pyrenees. In his absence his friends added his name to the \"Fédération des artistes\" (see: Courbet) of the Paris Commune. 
Manet stayed away from Paris, perhaps, until after the semaine sanglante: in a letter to Berthe Morisot at Cherbourg (10 June 1871) he writes, \"We came back to Paris a few days ago...\" (the semaine sanglante ended on 28 May).", "title": "Career" }, { "paragraph_id": 40, "text": "The prints and drawings collection of the Museum of Fine Arts (Budapest) has a watercolour/gouache by Manet, The Barricade, depicting a summary execution of Communards by Versailles troops based on a lithograph of the execution of Maximilian. A similar piece, The Barricade (oil on plywood), is held by a private collector.", "title": "Career" }, { "paragraph_id": 41, "text": "On 18 March 1871, he wrote to his (confederate) friend Félix Bracquemond in Paris about his visit to Bordeaux, the provisional seat of the French National Assembly of the Third French Republic where Émile Zola introduced him to the sites: \"I never imagined that France could be represented by such doddering old fools, not excepting that little twit Thiers...\" If this could be interpreted as support of the Commune, a following letter to Bracquemond (21 March 1871) expressed his idea more clearly: \"Only party hacks and the ambitious, the Henrys of this world following on the heels of the Milliéres, the grotesque imitators of the Commune of 1793\". He knew the communard Lucien Henry to have been a former painter's model and Millière, an insurance agent. \"What an encouragement all these bloodthirsty caperings are for the arts! But there is at least one consolation in our misfortunes: that we're not politicians and have no desire to be elected as deputies\".", "title": "Career" }, { "paragraph_id": 42, "text": "The public figure Manet admired most was the republican Léon Gambetta. In the heat of the seize mai coup in 1877, Manet opened up his atelier to a republican electoral meeting chaired by Gambetta's friend Eugène Spuller.", "title": "Career" }, { "paragraph_id": 43, "text": "Manet depicted many scenes of the streets of Paris in his works. The Rue Mosnier Decked with Flags depicts red, white, and blue pennants covering buildings on either side of the street; another painting of the same title features a one-legged man walking with crutches. Again depicting the same street, but this time in a different context, is Rue Mosnier with Pavers, in which men repair the roadway while people and horses move past.", "title": "Career" }, { "paragraph_id": 44, "text": "The Railway, widely known as The Gare Saint-Lazare, was painted in 1873. The setting is the urban landscape of Paris in the late 19th century. Using his favorite model in his last painting of her, a fellow painter, Victorine Meurent, also the model for Olympia and the Luncheon on the Grass, sits before an iron fence holding a sleeping puppy and an open book in her lap. Next to her is a little girl with her back to the painter, watching a train pass beneath them.", "title": "Career" }, { "paragraph_id": 45, "text": "Instead of choosing the traditional natural view as background for an outdoor scene, Manet opts for the iron grating which \"boldly stretches across the canvas\". The only evidence of the train is its white cloud of steam. In the distance, modern apartment buildings are seen. This arrangement compresses the foreground into a narrow focus. 
The traditional convention of deep space is ignored.", "title": "Career" }, { "paragraph_id": 46, "text": "Historian Isabelle Dervaux has described the reception this painting received when it was first exhibited at the official Paris Salon of 1874: \"Visitors and critics found its subject baffling, its composition incoherent, and its execution sketchy. Caricaturists ridiculed Manet's picture, in which only a few recognized the symbol of modernity that it has become today\". The painting is currently in the National Gallery of Art in Washington, D.C.", "title": "Career" }, { "paragraph_id": 47, "text": "Manet painted several boating subjects in 1874. Boating, now in the Metropolitan Museum of Art, exemplifies in its conciseness the lessons Manet learned from Japanese prints, and the abrupt cropping by the frame of the boat and sail adds to the immediacy of the image.", "title": "Career" }, { "paragraph_id": 48, "text": "In 1875, a book-length French edition of Edgar Allan Poe's The Raven included lithographs by Manet and translation by Mallarmé.", "title": "Career" }, { "paragraph_id": 49, "text": "In 1881, with pressure from his friend Antonin Proust, the French government awarded Manet the Légion d'honneur.", "title": "Career" }, { "paragraph_id": 50, "text": "In his mid-forties Manet's health deteriorated, and he developed severe pain and partial paralysis in his legs. In 1879 he began receiving hydrotherapy treatments at a spa near Meudon intended to improve what he believed was a circulatory problem, but in reality he was suffering from locomotor ataxia, a known side-effect of syphilis. In 1880, he painted a portrait there of the opera singer Émilie Ambre as Carmen. Ambre and her lover Gaston de Beauplan had an estate in Meudon and had organized the first exhibition of Manet's The Execution of Emperor Maximilian in New York in December 1879.", "title": "Career" }, { "paragraph_id": 51, "text": "In his last years Manet painted many small-scale still lifes of fruits and vegetables, such as A Bunch of Asparagus and The Lemon (both 1880). He completed his last major work, A Bar at the Folies-Bergère (Un Bar aux Folies-Bergère), in 1882, and it hung in the Salon that year. Afterwards, he limited himself to small formats.", "title": "Career" }, { "paragraph_id": 52, "text": "Manet's last paintings were of flowers in glass vases. There are 20 such paintings known, with the last one painted in March 1883, barely two months before his death. Quoted in Venice thirteen years later, Manet is credited with stating that an artist can say everything he has to say with \"flowers, fruit, and clouds.\" His last flower paintings are a demonstration of that belief.", "title": "Career" }, { "paragraph_id": 53, "text": "In 2023, the Metropolitan Museum of Art in New York City exhibited a two-person exhibition of Manet with Degas.", "title": "Career" }, { "paragraph_id": 54, "text": "In April 1883, his left foot was amputated because of gangrene caused by complications from syphilis and rheumatism. He died eleven days later on 30 April in Paris. He is buried in the Passy Cemetery in the city.", "title": "Death" }, { "paragraph_id": 55, "text": "Manet's public career lasted from 1861, the year of his first participation in the Salon, until his death in 1883. 
His known extant works, as catalogued in 1975 by Denis Rouart and Daniel Wildenstein, comprise 430 oil paintings, 89 pastels, and more than 400 works on paper.", "title": "Legacy" }, { "paragraph_id": 56, "text": "Although harshly condemned by critics who decried its lack of conventional finish, Manet's work had admirers from the beginning. One was Émile Zola, who wrote in 1867: \"We are not accustomed to seeing such simple and direct translations of reality. Then, as I said, there is such a surprisingly elegant awkwardness ... it is a truly charming experience to contemplate this luminous and serious painting which interprets nature with a gentle brutality.\"", "title": "Legacy" }, { "paragraph_id": 57, "text": "The roughly painted style and photographic lighting in Manet's paintings was seen as specifically modern, and as a challenge to the Renaissance works he copied or used as source material. He rejected the technique he had learned in the studio of Thomas Couture – in which a painting was constructed using successive layers of paint on a dark-toned ground – in favor of a direct, alla prima method using opaque paint on a light ground. Novel at the time, this method made possible the completion of a painting in a single sitting. It was adopted by the Impressionists, and became the prevalent method of painting in oils for generations that followed. Manet's work is considered \"early modern\", partially because of the opaque flatness of his surfaces, the frequent sketch-like passages, and the black outlining of figures, all of which draw attention to the surface of the picture plane and the material quality of paint.", "title": "Legacy" }, { "paragraph_id": 58, "text": "The art historian Beatrice Farwell says Manet \"has been universally regarded as the Father of Modernism. With Courbet he was among the first to take serious risks with the public whose favour he sought, the first to make alla prima painting the standard technique for oil painting and one of the first to take liberties with Renaissance perspective and to offer 'pure painting' as a source of aesthetic pleasure. He was a pioneer, again with Courbet, in the rejection of humanistic and historical subject-matter, and shared with Degas the establishment of modern urban life as acceptable material for high art.\"", "title": "Legacy" }, { "paragraph_id": 59, "text": "The late Manet painting, Le Printemps (1881), sold to the J. Paul Getty Museum for $65.1 million, setting a new auction record for Manet, exceeding its pre-sale estimate of $25–35 million at Christie's on 5 November 2014. The previous auction record was held by Self-Portrait With Palette which sold for $33.2 million at Sotheby's on 22 June 2010.", "title": "Legacy" } ]
Édouard Manet was a French modernist painter. He was one of the first 19th-century artists to paint modern life, as well as a pivotal figure in the transition from Realism to Impressionism. Born into an upper-class household with strong political connections, Manet rejected the naval career originally envisioned for him; he became engrossed in the world of painting. His early masterworks, The Luncheon on the Grass or Olympia, "premiering" in 1863 and '65, respectively, caused great controversy with both critics and the Academy of Fine Arts, but soon were praised by progressive artists as the breakthrough acts to the new style, Impressionism. Today too, these works, along with others, are considered watershed paintings that mark the start of modern art. The last 20 years of Manet's life saw him form bonds with other great artists of the time; he developed his own simple and direct style that would be heralded as innovative and serve as a major influence for future painters.
2001-08-24T01:21:06Z
2023-12-30T02:05:30Z
[ "Template:Authority control (arts)", "Template:Redirect-for-distinguish", "Template:Use British English", "Template:Sfn", "Template:Efn", "Template:Internet Archive author", "Template:Modernism", "Template:Lang", "Template:Cite web", "Template:Cite AV media", "Template:Notelist", "Template:Berthe Morisot", "Template:IPAc-en", "Template:Cite LPD", "Template:ISBN", "Template:Base Léonore", "Template:Manet", "Template:Impressionists", "Template:Cite news", "Template:Use dmy dates", "Template:Infobox artist", "Template:Main", "Template:Reflist", "Template:Cite book", "Template:Webarchive", "Template:Short description", "Template:IPA-fr", "Template:Cite EPD", "Template:Circa", "Template:Sisterlinks" ]
https://en.wikipedia.org/wiki/%C3%89douard_Manet
9,616
Evolutionarily stable strategy
An evolutionarily stable strategy (ESS) is a strategy (or set of strategies) that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy (or set of strategies) which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science. In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change). Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper. Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution. The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology. Maynard Smith explains further in his 1982 book Evolution and the Theory of Games. Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it. Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author. The concept was derived from R. H. MacArthur and W. D. Hamilton's work on sex ratios, derived from Fisher's principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour. Uses of ESS: The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies. Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives. 
Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes. An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if for both players, for any strategy T: E(S,S) ≥ E(T,S). In this definition, a strategy T≠S can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS. Maynard Smith and Price specify two conditions for a strategy S to be an ESS. For all T≠S, either E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T). The first condition is sometimes called a strict Nash equilibrium. The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T. There is also an alternative, stronger definition of ESS, due to Thomas. This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that for all T≠S: E(S,S) ≥ E(T,S), and E(S,T) > E(T,T). In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second. In words, this definition looks like this: The payoff of the first player when both players play strategy S is higher than (or equal to) the payoff of the first player when he changes to another strategy T and the second player keeps his strategy S, and the payoff of the first player when only his opponent changes his strategy to T is higher than his payoff in the case that both players change their strategies to T. This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set. In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS. Some games may have Nash equilibria that are not ESSes. For example, in harm thy neighbor (whose payoff matrix is shown here) both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B). Nash equilibria with equally scoring alternatives can be ESSes. 
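To make the two Maynard Smith and Price conditions concrete, the following short Python sketch checks each pure strategy of a symmetric two-strategy game against them. The payoff numbers, the dictionary E, and the function name is_ess are invented for this illustration and are not the article's own payoff matrices; they are merely chosen so that, as in the harm thy neighbor discussion, A and B tie when playing against A but B does better against B.

```python
# Minimal sketch: testing whether a pure strategy is an ESS in a symmetric
# two-player game, using Maynard Smith and Price's two conditions.
# The payoff values below are hypothetical, chosen only to illustrate a game
# in which B (but not A) is evolutionarily stable.

# E[(S, T)] = payoff to a player using strategy S against an opponent using T.
E = {
    ("A", "A"): 2, ("A", "B"): 1,
    ("B", "A"): 2, ("B", "B"): 2,
}
strategies = ["A", "B"]

def is_ess(s, payoff, strategies):
    """Return True if, for every other strategy t, either E(s,s) > E(t,s),
    or E(s,s) == E(t,s) and E(s,t) > E(t,t)."""
    for t in strategies:
        if t == s:
            continue
        strict_nash = payoff[(s, s)] > payoff[(t, s)]
        second_condition = (payoff[(s, s)] == payoff[(t, s)]
                            and payoff[(s, t)] > payoff[(t, t)])
        if not (strict_nash or second_condition):
            return False
    return True

for s in strategies:
    print(s, "is an ESS:", is_ess(s, E, strategies))
# With these illustrative numbers: A is not an ESS (E(A,A) == E(B,A) but
# E(A,B) is not greater than E(B,B)), while B is an ESS.
```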
For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than does D. So here although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result, C is an ESS. Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESS. Consider the Game of chicken. There are two pure strategy Nash equilibria in this game (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay are ESSes. There is a third Nash equilibrium, a mixed strategy which is an ESS for this game (see Hawk-dove game and Best response for explanation). This last example points to an important difference between Nash equilibria and ESS. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), while ESS are defined in terms of strategies themselves. The equilibria defined by ESS must always be symmetric, and thus have fewer equilibrium points. In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations. In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade. Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of classical game theory. In an evolutionarily stable state, a population's genetic composition is restored by selection after a disturbance, if the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population that returns to using a strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical system, or evolutionary game theory. This is now called convergent stability. B. Thomas (1984) applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS. Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic. In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist. In an infinite population, an ESS can instead be defined as a strategy which, should it become invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability >p, as illustrated by the evolution of bet-hedging. A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. 
While the Prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans. Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round—it responds to Cooperate with Cooperate and Defect with Defect. If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect. Even if the mutant population grows, Tit-for-Tat continues to outperform it, so the percentage of mutants in the population will be kept small. Tit for Tat is therefore an ESS, with respect to only these two strategies. On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of them. If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit-for-Tat can coexist, if there is a small percentage of the population that is Always Defect, the selective pressure is against Always Cooperate, and in favour of Tit-for-Tat. This is due to the lower payoffs of cooperating than those of defecting in case the opponent defects. This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives. The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies. Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain human behaviours that lack any genetic influences.
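The iterated comparison described above can be sketched in a few lines of Python. The payoff values (temptation 5, reward 3, punishment 1, sucker 0) and the round count of 200 are conventional illustrative choices rather than values taken from the article, and the strategy functions are written only for this example.

```python
# Minimal sketch: total payoffs in the iterated Prisoner's dilemma for
# Always Defect, Always Cooperate, and Tit for Tat.
# Payoffs are the conventional illustrative values (T=5, R=3, P=1, S=0);
# the number of rounds is fixed arbitrarily at 200.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    """Return the total payoff of each strategy over `rounds` repetitions."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# In a population dominated by Tit for Tat, an Always Defect mutant scores
# worse against the residents than the residents score against each other,
# which is why its share of the population stays small.
print("TFT vs TFT: ", play(tit_for_tat, tit_for_tat))        # (600, 600)
print("AllD vs TFT:", play(always_defect, tit_for_tat))       # (204, 199)
print("AllC vs TFT:", play(always_cooperate, tit_for_tat))    # (600, 600)
```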
[ { "paragraph_id": 0, "text": "An evolutionarily stable strategy (ESS) is a strategy (or set of strategies) that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy (or set of strategies) which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science.", "title": "" }, { "paragraph_id": 1, "text": "In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also \"evolutionarily stable.\" Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change).", "title": "" }, { "paragraph_id": 2, "text": "Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper. Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution. The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology. Maynard Smith explains further in his 1982 book Evolution and the Theory of Games. Sometimes these are cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it.", "title": "History" }, { "paragraph_id": 3, "text": "Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author.", "title": "History" }, { "paragraph_id": 4, "text": "The concept was derived from R. H. MacArthur and W. D. Hamilton's work on sex ratios, derived from Fisher's principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour.", "title": "History" }, { "paragraph_id": 5, "text": "Uses of ESS:", "title": "History" }, { "paragraph_id": 6, "text": "The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies.", "title": "Motivation" }, { "paragraph_id": 7, "text": "Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. 
Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives.", "title": "Motivation" }, { "paragraph_id": 8, "text": "Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes.", "title": "Motivation" }, { "paragraph_id": 9, "text": "An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if for both players, for any strategy T:", "title": "Nash equilibrium" }, { "paragraph_id": 10, "text": "In this definition, a strategy T≠S can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS.", "title": "Nash equilibrium" }, { "paragraph_id": 11, "text": "Maynard Smith and Price specify two conditions for a strategy S to be an ESS. For all T≠S, either", "title": "Nash equilibrium" }, { "paragraph_id": 12, "text": "The first condition is sometimes called a strict Nash equilibrium. The second is sometimes called \"Maynard Smith's second condition\". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T.", "title": "Nash equilibrium" }, { "paragraph_id": 13, "text": "There is also an alternative, stronger definition of ESS, due to Thomas. This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that for all T≠S", "title": "Nash equilibrium" }, { "paragraph_id": 14, "text": "In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. 
Note that the two definitions are not precisely equivalent: for example, each pure strategy in the coordination game below is an ESS by the first definition but not the second.", "title": "Nash equilibrium" }, { "paragraph_id": 15, "text": "In words, this definition looks like this: The payoff of the first player when both players play strategy S is higher than (or equal to) the payoff of the first player when he changes to another strategy T and the second player keeps his strategy S and the payoff of the first player when only his opponent changes his strategy to T is higher than his payoff in case that both of players change their strategies to T.", "title": "Nash equilibrium" }, { "paragraph_id": 16, "text": "This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set.", "title": "Nash equilibrium" }, { "paragraph_id": 17, "text": "In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS.", "title": "Nash equilibrium" }, { "paragraph_id": 18, "text": "Some games may have Nash equilibria that are not ESSes. For example, in harm thy neighbor (whose payoff matrix is shown here) both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B).", "title": "Nash equilibrium" }, { "paragraph_id": 19, "text": "Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than does D. So here although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result, C is an ESS.", "title": "Nash equilibrium" }, { "paragraph_id": 20, "text": "Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESS. Consider the Game of chicken. There are two pure strategy Nash equilibria in this game (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay are ESSes. There is a third Nash equilibrium, a mixed strategy which is an ESS for this game (see Hawk-dove game and Best response for explanation).", "title": "Nash equilibrium" }, { "paragraph_id": 21, "text": "This last example points to an important difference between Nash equilibria and ESS. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), while ESS are defined in terms of strategies themselves. The equilibria defined by ESS must always be symmetric, and thus have fewer equilibrium points.", "title": "Nash equilibrium" }, { "paragraph_id": 22, "text": "In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations.", "title": "Vs. 
evolutionarily stable state" }, { "paragraph_id": 23, "text": "In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade. Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of classical game theory.", "title": "Vs. evolutionarily stable state" }, { "paragraph_id": 24, "text": "In an evolutionarily stable state, a population's genetic composition is restored by selection after a disturbance, if the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population that returns to using a strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical system, or evolutionary game theory. This is now called convergent stability.", "title": "Vs. evolutionarily stable state" }, { "paragraph_id": 25, "text": "B. Thomas (1984) applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS.", "title": "Vs. evolutionarily stable state" }, { "paragraph_id": 26, "text": "Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic.", "title": "Vs. evolutionarily stable state" }, { "paragraph_id": 27, "text": "In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist. In an infinite population, an ESS can instead be defined as a strategy which, should it become invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability >p, as illustrated by the evolution of bet-hedging.", "title": "Stochastic ESS" }, { "paragraph_id": 28, "text": "A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans.", "title": "Prisoner's dilemma" }, { "paragraph_id": 29, "text": "Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round—it responds to Cooperate with Cooperate and Defect with Defect.", "title": "Prisoner's dilemma" }, { "paragraph_id": 30, "text": "If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect. 
Even if the mutant population grows, Tit-for-Tat continues to outperform it, so the percentage of mutants in the population will be kept small. Tit for Tat is therefore an ESS, with respect to only these two strategies. On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of them. If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit-for-Tat can coexist, if there is a small percentage of the population that is Always Defect, the selective pressure is against Always Cooperate, and in favour of Tit-for-Tat. This is due to the lower payoffs of cooperating than those of defecting in case the opponent defects.", "title": "Prisoner's dilemma" }, { "paragraph_id": 31, "text": "This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives.", "title": "Prisoner's dilemma" }, { "paragraph_id": 32, "text": "The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies.", "title": "Human behavior" }, { "paragraph_id": 33, "text": "Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain human behaviours that lack any genetic influences.", "title": "Human behavior" } ]
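As a rough illustration of convergent stability (an evolutionarily stable state to which a perturbed population returns), the sketch below iterates discrete replicator dynamics on the Hawk-dove game mentioned above. The resource value V, the fighting cost C, the baseline fitness, and the starting perturbation are all assumed for illustration; with V=2 and C=4 the stable mixture has a Hawk share of V/C = 0.5.

```python
# Minimal sketch: discrete replicator dynamics for the Hawk-dove game,
# showing the population returning to the mixed equilibrium after a
# perturbation (convergent stability / an evolutionarily stable state).
# V (resource value) and C (cost of fighting) are assumed for illustration;
# with V=2 and C=4 the stable share of Hawks is V/C = 0.5.

V, C = 2.0, 4.0
BASELINE = 5.0  # background fitness keeping total fitness positive

def payoffs(x):
    """Expected game payoff to Hawk and to Dove when a fraction x plays Hawk."""
    hawk = x * (V - C) / 2 + (1 - x) * V
    dove = x * 0.0 + (1 - x) * V / 2
    return hawk, dove

def step(x):
    """One generation of the discrete replicator update."""
    hawk, dove = payoffs(x)
    fit_hawk = BASELINE + hawk
    fit_dove = BASELINE + dove
    mean_fit = x * fit_hawk + (1 - x) * fit_dove
    return x * fit_hawk / mean_fit

x = 0.9  # perturbed state: far more Hawks than the stable mixture
for generation in range(200):
    x = step(x)
print(round(x, 3))  # converges to 0.5, the V/C mixture
```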
An evolutionarily stable strategy (ESS) is a strategy that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science. In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it.
2023-04-09T23:30:36Z
[ "Template:Infobox equilibrium", "Template:Cite journal", "Template:Cite encyclopedia", "Template:Dead link", "Template:Evolution", "Template:Reflist", "Template:Cite book", "Template:Game theory", "Template:Evolutionary psychology", "Template:Short description", "Template:Payoff matrix", "Template:Webarchive", "Template:ISBN", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Evolutionarily_stable_strategy
9,617
Element
Element or elements may refer to:
[ { "paragraph_id": 0, "text": "Element or elements may refer to:", "title": "" } ]
Element or elements may refer to:
2001-07-28T03:38:16Z
2023-11-02T17:42:32Z
[ "Template:Wiktionary", "Template:Look from", "Template:In title", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Element
9,619
Extremophile
An extremophile (from Latin extremus 'extreme', and Ancient Greek φιλία (philía) 'love') is an organism that is able to live (or in some cases thrive) in extreme environments, i.e., environments with conditions approaching or expanding the limits of what known life can adapt to, such as extreme temperature, radiation, salinity, or pH level. Since the definition of an extreme environment is relative to an arbitrarily defined standard, often an anthropocentric one, these organisms can be considered ecologically dominant in the evolutionary history of the planet. Some spores and cocooned bacteria samples have been dormant for more than 40 million years; extremophiles have continued to thrive in the most extreme conditions, making them one of the most abundant lifeforms. The study of extremophiles has expanded human knowledge of the limits of life, and informs speculation about extraterrestrial life. Extremophiles are also of interest because of their potential for bioremediation of environments made hazardous to humans due to pollution or contamination. In the 1980s and 1990s, biologists found that microbial life has great flexibility for surviving in extreme environments—niches that are acidic, extraordinarily hot, or with irregular air pressure for example—that would be completely inhospitable to complex organisms. Some scientists even concluded that life may have begun on Earth in hydrothermal vents far beneath the ocean's surface. According to astrophysicist Steinn Sigurdsson, "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Some bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica, and in the Marianas Trench, the deepest place in Earth's oceans. Expeditions of the International Ocean Discovery Program found microorganisms in 120 °C (248 °F) sediment that is 1.2 km (0.75 mi) below seafloor in the Nankai Trough subduction zone. Some microorganisms have been found thriving inside rocks up to 1,900 feet (580 m) below the sea floor under 8,500 feet (2,600 m) of ocean off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are." A key to extremophile adaptation is their amino acid composition, affecting their protein folding ability under particular conditions. Studying extreme environments on Earth can help researchers understand the limits of habitability on other worlds. Tom Gheysens from Ghent University in Belgium and some of his colleagues have presented research findings that show spores from a species of Bacillus bacteria survived and were still viable after being heated to temperatures of 420 °C (788 °F). There are many classes of extremophiles that range all around the globe; each corresponding to the way its environmental niche differs from mesophilic conditions. These classifications are not exclusive. Many extremophiles fall under multiple categories and are classified as polyextremophiles. For example, organisms living inside hot rocks deep under Earth's surface are thermophilic and piezophilic such as Thermococcus barophilus. A polyextremophile living at the summit of a mountain in the Atacama Desert might be a radioresistant xerophile, a psychrophile, and an oligotroph. Polyextremophiles are well known for their ability to tolerate both high and low pH levels. 
Astrobiology is the multidisciplinary field that investigates the deterministic conditions and contingent events with which life arises, distributes, and evolves in the universe. Astrobiology makes use of physics, chemistry, astronomy, solar physics, biology, molecular biology, ecology, planetary science, geography, and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth. Astrobiologists are particularly interested in studying extremophiles, as it allows them to map what is known about the limits of life on Earth to potential extraterrestrial environments. For example, analogous deserts of Antarctica are exposed to harmful UV radiation, low temperature, high salt concentration and low mineral concentration. These conditions are similar to those on Mars. Therefore, finding viable microbes in the subsurface of Antarctica suggests that there may be microbes surviving in endolithic communities and living under the Martian surface. Research indicates it is unlikely that Martian microbes exist on the surface or at shallow depths, but they may be found at subsurface depths of around 100 meters. Recent research carried out on extremophiles in Japan involved a variety of bacteria, including Escherichia coli and Paracoccus denitrificans, being subjected to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g (i.e. 403,627 times the gravity experienced on Earth). P. denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration, which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications for the feasibility of panspermia. On 26 April 2012, scientists reported that lichen survived a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR) and showed a remarkable capacity to adapt its photosynthetic activity. On 29 April 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence". On 19 May 2014, scientists announced that numerous microbes, like Tersicoccus phoenicis, may be resistant to methods usually used in spacecraft assembly clean rooms. It is not currently known if such resistant microbes could have withstood space travel and are present on the Curiosity rover now on the planet Mars. On 20 August 2014, scientists confirmed the existence of microorganisms living half a mile below the ice of Antarctica. In September 2015, scientists from the CNR-National Research Council of Italy reported that S. solfataricus was able to survive under Martian radiation at a wavelength that was considered extremely lethal to most bacteria. This discovery is significant because it indicates that not only bacterial spores, but also growing cells, can be remarkably resistant to strong UV radiation.
In June 2016, scientists from Brigham Young University conclusively reported that endospores of Bacillus subtilis were able to survive high-speed impacts of up to 299±28 m/s, extreme shock, and extreme deceleration. They pointed out that this feature might allow endospores to survive and to be transferred between planets by traveling within meteorites or by experiencing atmospheric disruption. Moreover, they suggested that the landing of spacecraft may also result in interplanetary spore transfer, given that spores can survive high-velocity impact when ejected from the spacecraft onto the planet's surface. This was the first study to report that bacteria can survive such high-velocity impacts. However, the lethal impact speed is unknown, and further experiments should be done by subjecting bacterial endospores to higher-velocity impacts. In August 2020, scientists reported that bacteria that feed on air, discovered in 2017 in Antarctica, are likely not limited to Antarctica, after discovering the two genes previously linked to their "atmospheric chemosynthesis" in the soil of two other, similar cold desert sites. This finding provides further information on this carbon sink and further strengthens the extremophile evidence that supports the potential existence of microbial life on alien planets. The same month, scientists reported that bacteria from Earth, particularly Deinococcus radiodurans, were found to survive for three years in outer space, based on studies on the International Space Station. These findings support the notion of panspermia. Extremophiles can also be useful players in the bioremediation of contaminated sites, as some species are capable of biodegradation under conditions too extreme for classic bioremediation candidate species. Anthropogenic activity causes the release of pollutants that may potentially settle in extreme environments, as is the case with tailings and sediment released from deep-sea mining activity. While most bacteria would be crushed by the pressure in these environments, piezophiles can tolerate these depths and can metabolize pollutants of concern if they possess bioremediation potential. There are multiple potential destinations for hydrocarbons after an oil spill has settled, and currents routinely deposit them in extreme environments. Methane bubbles resulting from the Deepwater Horizon oil spill were found 1.1 kilometers below water surface level and at concentrations as high as 183 μmol per kilogram. The combination of low temperatures and high pressures in this environment results in low microbial activity. However, bacteria that are present, including species of Pseudomonas, Aeromonas, and Vibrio, were found to be capable of bioremediation, albeit at a tenth of the speed at which they would perform at sea-level pressure. Polycyclic aromatic hydrocarbons increase in solubility and bioavailability with increasing temperature. Thermophilic Thermus and Bacillus species have demonstrated higher gene expression for the alkane mono-oxygenase alkB at temperatures exceeding 60 °C (140 °F). The expression of this gene is a crucial precursor to the bioremediation process. Fungi that have been genetically modified with cold-adapted enzymes to tolerate differing pH levels and temperatures have been shown to be effective at remediating hydrocarbon contamination in freezing conditions in the Antarctic. Acidithiobacillus ferrooxidans has been shown to be effective in remediating mercury in acidic soil due to its merA gene, which makes it mercury resistant.
Industrial effluents contain high levels of metals that can be detrimental to both human and ecosystem health. In extreme heat environments, the extremophile Geobacillus thermodenitrificans has been shown to effectively manage the concentration of these metals within twelve hours of introduction. Some acidophilic microorganisms are effective at metal remediation in acidic environments due to proteins found in their periplasm that are not present in any mesophilic organisms, which allow them to protect themselves from high proton concentrations. Rice paddies are highly oxidative environments that can produce high levels of lead or cadmium. Deinococcus radiodurans is resistant to the harsh conditions of the environment and is therefore a candidate species for limiting the extent of contamination by these metals. Some bacteria are also known to use rare earth elements in their biological processes; for example, Methylacidiphilum fumariolicum, Methylorubrum extorquens, and Methylobacterium radiotolerans are known to be able to use lanthanides as cofactors to increase their methanol dehydrogenase activity. Acid mine drainage is a major environmental concern associated with many metal mines. One of the most productive methods of its remediation is through the introduction of the extremophile organism Thiobacillus ferrooxidans. Any bacteria capable of inhabiting radioactive media can be classified as extremophiles. Radioresistant organisms are therefore critical in the bioremediation of radionuclides. Uranium is particularly challenging to contain when released into an environment and very harmful to both human and ecosystem health. The NANOBINDERS project is equipping bacteria that can survive in uranium-rich environments with gene sequences that enable proteins to bind to uranium in mining effluent, making it more convenient to collect and dispose of. Some examples are Shewanella putrefaciens, Geobacter metallireducens and some strains of Burkholderia fungorum. Radiotrophic fungi, which use radiation as an energy source, have been found inside and around the Chernobyl Nuclear Power Plant. Radioresistance has also been observed in certain species of macroscopic lifeforms. The lethal dose required to kill up to 50% of a tortoise population is 40,000 roentgens, compared to only 800 roentgens needed to kill 50% of a human population. In experiments exposing lepidopteran insects to gamma radiation, significant DNA damage was detected only at 20 Gy and higher doses, in contrast with human cells that showed similar damage at only 2 Gy. New sub-types of extremophiles are identified frequently, and the sub-category list for extremophiles is always growing. For example, microbial life lives in the liquid asphalt lake, Pitch Lake. Research indicates that extremophiles inhabit the asphalt lake in populations ranging between 10⁶ and 10⁷ cells/gram. Likewise, boron tolerance was until recently unknown, but a strong borophile was discovered in bacteria. With the recent isolation of Bacillus boroniphilus, borophiles came into discussion. Studying these borophiles may help illuminate the mechanisms of both boron toxicity and boron deficiency. In July 2019, a scientific study of Kidd Mine in Canada discovered sulfur-breathing organisms which live 7,900 feet (2,400 m) below the surface and which breathe sulfur in order to survive. These organisms are also remarkable in that they consume rocks such as pyrite as their regular food source.
The thermoalkaliphilic catalase, which initiates the breakdown of hydrogen peroxide into oxygen and water, was isolated from an organism, Thermus brockianus, found in Yellowstone National Park by Idaho National Laboratory researchers. The catalase operates over a temperature range from 30 °C to over 94 °C and a pH range from 6–10. This catalase is extremely stable compared to other catalases at high temperatures and pH. In a comparative study, the T. brockianus catalase exhibited a half life of 15 days at 80 °C and pH 10 while a catalase derived from Aspergillus niger had a half life of 15 seconds under the same conditions. The catalase will have applications for removal of hydrogen peroxide in industrial processes such as pulp and paper bleaching, textile bleaching, food pasteurization, and surface decontamination of food packaging. DNA modifying enzymes such as Taq DNA polymerase and some Bacillus enzymes used in clinical diagnostics and starch liquefaction are produced commercially by several biotechnology companies. Over 65 prokaryotic species are known to be naturally competent for genetic transformation, the ability to transfer DNA from one cell to another cell followed by integration of the donor DNA into the recipient cell's chromosome. Several extremophiles are able to carry out species-specific DNA transfer, as described below. However, it is not yet clear how common such a capability is among extremophiles. The bacterium Deinococcus radiodurans is one of the most radioresistant organisms known. This bacterium can also survive cold, dehydration, vacuum and acid and is thus known as a polyextremophile. D. radiodurans is competent to perform genetic transformation. Recipient cells are able to repair DNA damage in donor transforming DNA that had been UV irradiated as efficiently as they repair cellular DNA when the cells themselves are irradiated. The extreme thermophilic bacterium Thermus thermophilus and other related Thermus species are also capable of genetic transformation. Halobacterium volcanii, an extreme halophilic (saline tolerant) archaeon, is capable of natural genetic transformation. Cytoplasmic bridges are formed between cells that appear to be used for DNA transfer from one cell to another in either direction. Sulfolobus solfataricus and Sulfolobus acidocaldarius are hyperthermophilic archaea. Exposure of these organisms to the DNA damaging agents UV irradiation, bleomycin or mitomycin C induces species-specific cellular aggregation. UV-induced cellular aggregation of S. acidocaldarius mediates chromosomal marker exchange with high frequency. Recombination rates exceed those of uninduced cultures by up to three orders of magnitude. Frols et al. and Ajon et al. hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells in order to repair damaged DNA by means of homologous recombination. Van Wolferen et al. noted that this DNA exchange process may be crucial under DNA damaging conditions such as high temperatures. It has also been suggested that DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the more well-studied bacterial transformation systems that involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage (and see Transformation (genetics)). Extracellular membrane vesicles (MVs) might be involved in DNA transfer between different hyperthermophilic archaeal species. It has been shown that both plasmids and viral genomes can be transferred via MVs. 
Notably, a horizontal plasmid transfer has been documented between hyperthermophilic Thermococcus and Methanocaldococcus species, respectively belonging to the orders Thermococcales and Methanococcales.
[ { "paragraph_id": 0, "text": "An extremophile (from Latin extremus 'extreme', and Ancient Greek φιλία (philía) 'love') is an organism that is able to live (or in some cases thrive) in extreme environments, i.e., environments with conditions approaching or expanding the limits of what known life can adapt to, such as extreme temperature, radiation, salinity, or pH level.", "title": "" }, { "paragraph_id": 1, "text": "Since the definition of an extreme environment is relative to an arbitrarily defined standard, often an anthropocentric one, these organisms can be considered ecologically dominant in the evolutionary history of the planet. Some spores and cocooned bacteria samples have been dormant for more than 40 million years; extremophiles have continued to thrive in the most extreme conditions, making them one of the most abundant lifeforms. The study of extremophiles has expanded human knowledge of the limits of life, and informs speculation about extraterrestrial life. Extremophiles are also of interest because of their potential for bioremediation of environments made hazardous to humans due to pollution or contamination.", "title": "" }, { "paragraph_id": 2, "text": "In the 1980s and 1990s, biologists found that microbial life has great flexibility for surviving in extreme environments—niches that are acidic, extraordinarily hot, or with irregular air pressure for example—that would be completely inhospitable to complex organisms. Some scientists even concluded that life may have begun on Earth in hydrothermal vents far beneath the ocean's surface.", "title": "Characteristics" }, { "paragraph_id": 3, "text": "According to astrophysicist Steinn Sigurdsson, \"There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation.\" Some bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica, and in the Marianas Trench, the deepest place in Earth's oceans. Expeditions of the International Ocean Discovery Program found microorganisms in 120 °C (248 °F) sediment that is 1.2 km (0.75 mi) below seafloor in the Nankai Trough subduction zone. Some microorganisms have been found thriving inside rocks up to 1,900 feet (580 m) below the sea floor under 8,500 feet (2,600 m) of ocean off the coast of the northwestern United States. According to one of the researchers, \"You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are.\" A key to extremophile adaptation is their amino acid composition, affecting their protein folding ability under particular conditions. Studying extreme environments on Earth can help researchers understand the limits of habitability on other worlds.", "title": "Characteristics" }, { "paragraph_id": 4, "text": "Tom Gheysens from Ghent University in Belgium and some of his colleagues have presented research findings that show spores from a species of Bacillus bacteria survived and were still viable after being heated to temperatures of 420 °C (788 °F).", "title": "Characteristics" }, { "paragraph_id": 5, "text": "There are many classes of extremophiles that range all around the globe; each corresponding to the way its environmental niche differs from mesophilic conditions. These classifications are not exclusive. Many extremophiles fall under multiple categories and are classified as polyextremophiles. 
For example, organisms living inside hot rocks deep under Earth's surface are thermophilic and piezophilic such as Thermococcus barophilus. A polyextremophile living at the summit of a mountain in the Atacama Desert might be a radioresistant xerophile, a psychrophile, and an oligotroph. Polyextremophiles are well known for their ability to tolerate both high and low pH levels.", "title": "Classifications" }, { "paragraph_id": 6, "text": "Astrobiology is the multidisciplinary field that investigates the deterministic conditions and contingent events with which life arises, distributes, and evolves in the universe. Astrobiology makes use of physics, chemistry, astronomy, solar physics, biology, molecular biology, ecology, planetary science, geography, and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth. Astrobiologists are particularly interested in studying extremophiles, as it allows them to map what is known about the limits of life on Earth to potential extraterrestrial environments For example, analogous deserts of Antarctica are exposed to harmful UV radiation, low temperature, high salt concentration and low mineral concentration. These conditions are similar to those on Mars. Therefore, finding viable microbes in the subsurface of Antarctica suggests that there may be microbes surviving in endolithic communities and living under the Martian surface. Research indicates it is unlikely that Martian microbes exist on the surface or at shallow depths, but may be found at subsurface depths of around 100 meters.", "title": "In astrobiology" }, { "paragraph_id": 7, "text": "Recent research carried out on extremophiles in Japan involved a variety of bacteria including Escherichia coli and Paracoccus denitrificans being subject to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g (i.e. 403,627 times the gravity experienced on Earth). P. denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications on the feasibility of panspermia.", "title": "In astrobiology" }, { "paragraph_id": 8, "text": "On 26 April 2012, scientists reported that lichen survived and showed remarkable results on the adaptation capacity of photosynthetic activity within the simulation time of 34 days under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR).", "title": "In astrobiology" }, { "paragraph_id": 9, "text": "On 29 April 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways \"not observed on Earth\" and in ways that \"can lead to increases in growth and virulence\".", "title": "In astrobiology" }, { "paragraph_id": 10, "text": "On 19 May 2014, scientists announced that numerous microbes, like Tersicoccus phoenicis, may be resistant to methods usually used in spacecraft assembly clean rooms. 
It is not currently known if such resistant microbes could have withstood space travel and are present on the Curiosity rover now on the planet Mars.", "title": "In astrobiology" }, { "paragraph_id": 11, "text": "On 20 August 2014, scientists confirmed the existence of microorganisms living half a mile below the ice of Antarctica.", "title": "In astrobiology" }, { "paragraph_id": 12, "text": "In September 2015, scientists from CNR-National Research Council of Italy reported that S. soflataricus was able to survive under Martian radiation at a wavelength that was considered extremely lethal to most bacteria. This discovery is significant because it indicates that not only bacterial spores, but also growing cells can be remarkably resistant to strong UV radiation.", "title": "In astrobiology" }, { "paragraph_id": 13, "text": "In June 2016, scientists from Brigham Young University conclusively reported that endospores of Bacillus subtilis were able to survive high speed impacts up to 299±28 m/s, extreme shock, and extreme deceleration. They pointed out that this feature might allow endospores to survive and to be transferred between planets by traveling within meteorites or by experiencing atmosphere disruption. Moreover, they suggested that the landing of spacecraft may also result in interplanetary spore transfer, given that spores can survive high-velocity impact while ejected from the spacecraft onto the planet surface. This is the first study which reported that bacteria can survive in such high-velocity impact. However, the lethal impact speed is unknown, and further experiments should be done by introducing higher-velocity impact to bacterial endospores.", "title": "In astrobiology" }, { "paragraph_id": 14, "text": "In August 2020 scientists reported that bacteria that feed on air discovered 2017 in Antarctica are likely not limited to Antarctica after discovering the two genes previously linked to their \"atmospheric chemosynthesis\" in soil of two other similar cold desert sites, which provides further information on this carbon sink and further strengthens the extremophile evidence that supports the potential existence of microbial life on alien planets.", "title": "In astrobiology" }, { "paragraph_id": 15, "text": "The same month, scientists reported that bacteria from Earth, particularly Deinococcus radiodurans, were found to survive for three years in outer space, based on studies on the International Space Station. These findings support the notion of panspermia.", "title": "In astrobiology" }, { "paragraph_id": 16, "text": "Extremophiles can also be useful players in the bioremediation of contaminated sites as some species are capable of biodegradation under conditions too extreme for classic bioremediation candidate species. Anthropogenic activity causes the release of pollutants that may potentially settle in extreme environments as is the case with tailings and sediment released from deep-sea mining activity. While most bacteria would be crushed by the pressure in these environments, piezophiles can tolerate these depths and can metabolize pollutants of concern if they possess bioremediation potential.", "title": "Bioremediation" }, { "paragraph_id": 17, "text": "There are multiple potential destinations for hydrocarbons after an oil spill has settled and currents routinely deposit them in extreme environments. 
Methane bubbles resulting from the Deepwater Horizon oil spill were found 1.1 kilometers below water surface level and at concentrations as high as 183 μmol per kilogram. The combination of low temperatures and high pressures in this environment result in low microbial activity. However, bacteria that are present including species of Pseudomonas, Aeromonas and Vibrio were found to be capable of bioremediation, albeit at a tenth of the speed they would perform at sea level pressure. Polycyclic aromatic hydrocarbons increase in solubility and bioavailability with increasing temperature. Thermophilic Thermus and Bacillus species have demonstrated higher gene expression for the alkane mono-oxygenase alkB at temperatures exceeding 60 °C (140 °F). The expression of this gene is a crucial precursor to the bioremediation process. Fungi that have been genetically modified with cold-adapted enzymes to tolerate differing pH levels and temperatures have been shown to be effective at remediating hydrocarbon contamination in freezing conditions in the Antarctic.", "title": "Bioremediation" }, { "paragraph_id": 18, "text": "Acidithiubacillus ferroxidans has been shown to be effective in remediating mercury in acidic soil due to its merA gene making it mercury resistant. Industrial effluent contain high levels of metals that can be detrimental to both human and ecosystem health. In extreme heat environments the extremophile Geobacillus thermodenitrificans has been shown to effectively manage the concentration of these metals within twelve hours of introduction. Some acidophilic microorganisms are effective at metal remediation in acidic environments due to proteins found in their periplasm, not present in any mesophilic organisms, allowing them to protect themselves from high proton concentrations. Rice paddies are highly oxidative environments that can produce high levels of lead or cadmium. Deinococcus radiodurans are resistant to the harsh conditions of the environment and are therefore candidate species for limiting the extent of contamination of these metals.", "title": "Bioremediation" }, { "paragraph_id": 19, "text": "Some bacteria are known to also use rare earth elements on their biological processes for example Methylacidiphilum fumariolicum, Methylorubrum extorquens and Methylobacterium radiotolerans are known to be able to use lanthanides as cofactors to increase their methanol dehydrogenase activity.", "title": "Bioremediation" }, { "paragraph_id": 20, "text": "Acid mine drainage is a major environmental concern associated with many metal mines. One of the most productive methods of its remediation is through the introduction of the extremophile organism Thiobacillus ferrooxidans.", "title": "Bioremediation" }, { "paragraph_id": 21, "text": "Any bacteria capable of inhabiting radioactive mediums can be classified as an extremophile. Radioresistant organisms are therefore critical in the bioremediation of radionuclides. Uranium is particularly challenging to contain when released into an environment and very harmful to both human and ecosystem health. The NANOBINDERS project is equipping bacteria that can survive in uranium rich environments with gene sequences that enable proteins to bind to uranium in mining effluent, making it more convenient to collect and dispose of. 
Some examples are Shewanella putrefaciens, Geobacter metallireducens and some strains of Burkholderia fungorum.", "title": "Bioremediation" }, { "paragraph_id": 22, "text": "Radiotrophic fungus, which use radiation as an energy source have been found inside and around the Chernobyl Nuclear Power Plant.", "title": "Bioremediation" }, { "paragraph_id": 23, "text": "Radioresistance has also been observed in certain species of macroscopic lifeforms. The lethal dose required to kill up to 50% of a tortoise population is 40,000 roentgens, compared to only 800 roentgens needed to kill 50% of a human population. In experiments exposing lepidopteran insects to gamma radiation, significant DNA damage was detected only at 20 Gy and higher doses, in contrast with human cells that showed similar damage at only 2 Gy.", "title": "Bioremediation" }, { "paragraph_id": 24, "text": "New sub-types of extremophiles are identified frequently and the sub-category list for extremophiles is always growing. For example, microbial life lives in the liquid asphalt lake, Pitch Lake. Research indicates that extremophiles inhabit the asphalt lake in populations ranging between 10 and 10 cells/gram. Likewise, until recently boron tolerance was unknown but a strong borophile was discovered in bacteria. With the recent isolation of Bacillus boroniphilus, borophiles came into discussion. Studying these borophiles may help illuminate the mechanisms of both boron toxicity and boron deficiency.", "title": "Examples and recent findings" }, { "paragraph_id": 25, "text": "In July 2019, a scientific study of Kidd Mine in Canada discovered sulfur-breathing organisms which live 7,900 feet (2,400 m) below the surface, and which breathe sulfur in order to survive. These organisms are also remarkable due to eating rocks such as pyrite as their regular food source.", "title": "Examples and recent findings" }, { "paragraph_id": 26, "text": "The thermoalkaliphilic catalase, which initiates the breakdown of hydrogen peroxide into oxygen and water, was isolated from an organism, Thermus brockianus, found in Yellowstone National Park by Idaho National Laboratory researchers. The catalase operates over a temperature range from 30 °C to over 94 °C and a pH range from 6–10. This catalase is extremely stable compared to other catalases at high temperatures and pH. In a comparative study, the T. brockianus catalase exhibited a half life of 15 days at 80 °C and pH 10 while a catalase derived from Aspergillus niger had a half life of 15 seconds under the same conditions. The catalase will have applications for removal of hydrogen peroxide in industrial processes such as pulp and paper bleaching, textile bleaching, food pasteurization, and surface decontamination of food packaging.", "title": "Biotechnology" }, { "paragraph_id": 27, "text": "DNA modifying enzymes such as Taq DNA polymerase and some Bacillus enzymes used in clinical diagnostics and starch liquefaction are produced commercially by several biotechnology companies.", "title": "Biotechnology" }, { "paragraph_id": 28, "text": "Over 65 prokaryotic species are known to be naturally competent for genetic transformation, the ability to transfer DNA from one cell to another cell followed by integration of the donor DNA into the recipient cell's chromosome. Several extremophiles are able to carry out species-specific DNA transfer, as described below. 
However, it is not yet clear how common such a capability is among extremophiles.", "title": "DNA transfer" }, { "paragraph_id": 29, "text": "The bacterium Deinococcus radiodurans is one of the most radioresistant organisms known. This bacterium can also survive cold, dehydration, vacuum and acid and is thus known as a polyextremophile. D. radiodurans is competent to perform genetic transformation. Recipient cells are able to repair DNA damage in donor transforming DNA that had been UV irradiated as efficiently as they repair cellular DNA when the cells themselves are irradiated. The extreme thermophilic bacterium Thermus thermophilus and other related Thermus species are also capable of genetic transformation.", "title": "DNA transfer" }, { "paragraph_id": 30, "text": "Halobacterium volcanii, an extreme halophilic (saline tolerant) archaeon, is capable of natural genetic transformation. Cytoplasmic bridges are formed between cells that appear to be used for DNA transfer from one cell to another in either direction.", "title": "DNA transfer" }, { "paragraph_id": 31, "text": "Sulfolobus solfataricus and Sulfolobus acidocaldarius are hyperthermophilic archaea. Exposure of these organisms to the DNA damaging agents UV irradiation, bleomycin or mitomycin C induces species-specific cellular aggregation. UV-induced cellular aggregation of S. acidocaldarius mediates chromosomal marker exchange with high frequency. Recombination rates exceed those of uninduced cultures by up to three orders of magnitude. Frols et al. and Ajon et al. hypothesized that cellular aggregation enhances species-specific DNA transfer between Sulfolobus cells in order to repair damaged DNA by means of homologous recombination. Van Wolferen et al. noted that this DNA exchange process may be crucial under DNA damaging conditions such as high temperatures. It has also been suggested that DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the more well-studied bacterial transformation systems that involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage (and see Transformation (genetics)).", "title": "DNA transfer" }, { "paragraph_id": 32, "text": "Extracellular membrane vesicles (MVs) might be involved in DNA transfer between different hyperthermophilic archaeal species. It has been shown that both plasmids and viral genomes can be transferred via MVs. Notably, a horizontal plasmid transfer has been documented between hyperthermophilic Thermococcus and Methanocaldococcus species, respectively belonging to the orders Thermococcales and Methanococcales.", "title": "DNA transfer" } ]
An extremophile is an organism that is able to live in extreme environments, i.e., environments with conditions approaching or expanding the limits of what known life can adapt to, such as extreme temperature, radiation, salinity, or pH level. Since the definition of an extreme environment is relative to an arbitrarily defined standard, often an anthropocentric one, these organisms can be considered ecologically dominant in the evolutionary history of the planet. Some spores and cocooned bacteria samples have been dormant for more than 40 million years; extremophiles have continued to thrive in the most extreme conditions, making them one of the most abundant lifeforms. The study of extremophiles has expanded human knowledge of the limits of life, and informs speculation about extraterrestrial life. Extremophiles are also of interest because of their potential for bioremediation of environments made hazardous to humans due to pollution or contamination.
2001-07-31T01:40:38Z
2023-12-28T00:32:02Z
[ "Template:Citation needed", "Template:Main", "Template:Reflist", "Template:Refbegin", "Template:Portal bar", "Template:Convert", "Template:Cite news", "Template:Cite book", "Template:Page needed", "Template:Extremophile", "Template:Short description", "Template:Use dmy dates", "Template:Etymology", "Template:See also", "Template:Webarchive", "Template:Refend", "Template:Center", "Template:Cvt", "Template:Incomplete list", "Template:Cite journal", "Template:Cite web", "Template:Cbignore", "Template:Citation", "Template:Cite magazine" ]
https://en.wikipedia.org/wiki/Extremophile
9,620
Education reform
Education reform is the name given to the goal of changing public education. The meaning and methods of education have changed through debates over what content or experiences result in an educated individual or an educated society. Historically, the motivations for reform have not reflected the current needs of society. A consistent theme of reform includes the idea that large systematic changes to educational standards will produce social returns in citizens' health, wealth, and well-being. As part of the broader social and political processes, the term education reform refers to the chronology of significant, systematic revisions made to amend the educational legislation, standards, methodology, and policy affecting a nation's public school system to reflect the needs and values of contemporary society. In the 18th century, classical education instruction from an in-home personal tutor, hired at the family's expense, was primarily a privilege for children from wealthy families. Innovations such as encyclopedias, public libraries, and grammar schools all aimed to relieve some of the financial burden associated with the expenses of the classical education model. Motivations during the Victorian era emphasized the importance of self-improvement. Victorian education focused on teaching commercially valuable topics, such as modern languages and mathematics, rather than classical liberal arts subjects, such as Latin, art, and history. Education reformists like Horace Mann and his proponents focused on making schooling more accessible and developing a robust state-supported common school system. John Dewey, an early 20th-century reformer, focused on improving society by advocating for a scientific, pragmatic, or democratic principle-based curriculum. Maria Montessori, by contrast, incorporated humanistic motivations to "meet the needs of the child". In historic Prussia, a motivation to foster national unity led to formal education concentrated on teaching national language literacy to young children, resulting in the kindergarten. The history of educational pedagogy in the United States has ranged from teaching literacy and proficiency in religious doctrine to establishing cultural literacy, assimilating immigrants into a democratic society, producing a skilled labor force for the industrialized workplace, preparing students for careers, and competing in a global marketplace. Education inequality is also a motivation for education reform, which seeks to address the problems of a community. Education reform, in general, implies a continual effort to modify and improve the institution of education. Over time, as the needs and values of society change, attitudes towards public education change. As a social institution, education plays an integral role in the process of socialization. "Socialization is broadly composed of distinct inter- and intra-generational processes. Both involve the harmonization of an individual's attitudes and behaviors with that of their socio-cultural milieu." Educational matrices are meant to reinforce those socially acceptable informal and formal norms, values, and beliefs that individuals need to learn in order to be accepted as good, functioning, and productive members of their society. Education reform is the process of constantly renegotiating and restructuring the educational standards to reflect the ever-evolving contemporary ideals of social, economic, and political culture. Reforms can be based on bringing education into alignment with a society's core values.
Reforms that attempt to change a society's core values can connect alternative education initiatives with a network of other alternative institutions. Education reform has been pursued for a variety of specific reasons, but generally most reforms aim at redressing some societal ills, such as poverty-, gender-, or class-based inequities, or perceived ineffectiveness. Current education trends in the United States represent multiple achievement gaps across ethnicities, income levels, and geographies. As McKinsey and Company reported in a 2009 analysis, "These educational gaps impose on the United States the economic equivalent of a permanent national recession." Reforms are usually proposed by thinkers who aim to redress societal ills or institute societal changes, most often through a change in the education of the members of a class of people—the preparation of a ruling class to rule or a working class to work, the social hygiene of a lower or immigrant class, the preparation of citizens in a democracy or republic, etc. The idea that all children should be provided with a high level of education is a relatively recent idea, and has arisen largely in the context of Western democracy in the 20th century. The "beliefs" of school districts are optimistic that quite literally "all students will succeed", which, in the context of high school graduation examinations in the United States, means that all students in all groups, regardless of heritage or income, will pass tests that at their introduction typically fall beyond the ability of all but the top 20 to 30 percent of students. The claims run counter to historical research showing that all ethnic and income groups score differently on all standardized tests and standards-based assessments and that students will achieve on a bell curve. Instead, education officials across the world believe that by setting clear, achievable, higher standards, aligning the curriculum, and assessing outcomes, learning can be increased for all students, and more students can succeed than the 50 percent who are defined to be above or below grade level by norm-referenced standards. States have tried to use state schools to increase state power, especially to make better soldiers and workers. This strategy was first adopted to unify related linguistic groups in Europe, including France, Germany, and Italy. Exact mechanisms are unclear, but it often fails in areas where populations are culturally segregated, as when the U.S. Indian school service failed to suppress Lakota and Navaho, or when a culture has widely respected autonomous cultural institutions, as when the Spanish failed to suppress Catalan. Many students of democracy have desired to improve education in order to improve the quality of governance in democratic societies; the necessity of good public education follows logically if one believes that the quality of democratic governance depends on the ability of citizens to make informed, intelligent choices, and that education can improve these abilities. Politically motivated educational reforms of the democratic type are recorded as far back as Plato in The Republic. In the United States, this lineage of democratic education reform was continued by Thomas Jefferson, who advocated ambitious reforms partly along Platonic lines for public schooling in Virginia. Another motivation for reform is the desire to address socio-economic problems, which many people see as having significant roots in lack of education.
Starting in the 20th century, people have attempted to argue that small improvements in education can have large returns in such areas as health, wealth and well-being. For example, in Kerala, India in the 1950s, increases in women's health were correlated with increases in female literacy rates. In Iran, increased primary education was correlated with increased farming efficiencies and income. In both cases, some researchers have interpreted these correlations as representing an underlying causal relationship: education causes socio-economic benefits. In the case of Iran, researchers concluded that the improvements were due to farmers gaining reliable access to national crop prices and scientific farming information. As taught from the 18th to the 19th century, Western classical education curriculums focused on concrete details like "Who?", "What?", "When?", "Where?". Unless carefully taught, large group instruction naturally neglects asking the theoretical "Why?" and "Which?" questions that can be discussed in smaller groups. Classical education in this period also did not teach local (vernacular) languages and culture. Instead, it taught high-status ancient languages (Greek and Latin) and their cultures. This produced odd social effects in which an intellectual class might be more loyal to ancient cultures and institutions than to their native vernacular languages and their actual governing authorities. Jean-Jacques Rousseau, father of the Child Study Movement, centered the child as an object of study. Emile: Or, On Education, Rousseau's principal work on education, lays out an educational program for a hypothetical newborn's education through adulthood. Rousseau provided a dual critique of the educational vision outlined in Plato's Republic and that of his society in contemporary Europe. He regarded educational methods as contributing to the child's development; he held that a person could be either a man or a citizen. While Plato's plan could have brought the latter at the expense of the former, contemporary education failed at both tasks. He advocated a radical withdrawal of the child from society and an educational process that utilized the child's natural potential and curiosity, teaching the child by confronting them with simulated real-life obstacles and conditioning the child through experience rather than intellectual instruction. Rousseau's ideas were rarely implemented directly, but they influenced later thinkers, particularly Johann Heinrich Pestalozzi and Friedrich Wilhelm August Fröbel, the inventor of the kindergarten. European and Asian nations regard education as essential to maintaining national, cultural, and linguistic unity. In the late 18th century (~1779), Prussia instituted primary school reforms expressly to teach a unified version of the national language, "Hochdeutsch". One significant reform was the kindergarten, whose purpose was to have the children participate in supervised activities taught by instructors who spoke the national language. The concept embraced the idea that children absorb new language skills more easily and quickly when they are young. The current model of kindergarten is reflective of the Prussian model. In other countries, such as the Soviet Union, France, Spain, and Germany, the Prussian model has dramatically improved reading and math test scores for linguistic minorities. In the 19th century, before the advent of government-funded public schools, Protestant organizations established Charity Schools to educate the lower social classes.
The Roman Catholic Church and governments later adopted the model. Designed to be inexpensive, Charity schools operated on minimal budgets and strived to serve as many needy children as possible. This led to the development of grammar schools, which primarily focused on teaching literacy, grammar, and bookkeeping skills so that the students could use books as an inexpensive resource to continue their education. Grammar was the first third of the then-prevalent system of classical education. Educators Joseph Lancaster and Andrew Bell developed the monitorial system, also known as "mutual instruction" or the "Bell–Lancaster method". Their contemporary, educationalist and writer Elizabeth Hamilton, suggested that in some important aspects the method had been "anticipated" by the Belfast schoolmaster David Manson. In the 1760s Manson had developed a peer-teaching and monitoring system within the context of what he called a "play school" that dispensed with "the discipline of the rod". (More radically, Manson proposed the "liberty of each [child] to take the quantity [of lessons] agreeable to his inclination"). Lancaster, an impoverished Quaker in early 19th-century London, and Bell, at the Madras School in India, developed this model independently of one another. However, by design, their model utilizes more advanced students as a resource to teach the less advanced students, achieving student-teacher ratios as small as 1:2 and educating more than 1000 students per adult. The lack of adult supervision at the Lancaster school resulted in the older children acting as disciplinary monitors and taskmasters. To provide order and promote discipline, the school implemented a unique internal economic system, inventing a currency called scrip. Although the currency was worthless in the outside world, it was created at a fixed exchange rate from a student's tuition, and students could use scrip to buy food, school supplies, books, and other items from the school store. Students could earn scrip through tutoring. To promote discipline, the school adopted a work-study model. Every job of the school was bid for by students, with the largest bid winning. However, any student tutor could auction positions in his or her classes to earn scrip. The bids for student jobs paid for the adult supervision. Lancaster promoted his system in a piece called Improvements in Education that spread widely throughout the English-speaking world. Lancaster schools provided a grammar-school education with fully developed internal economies for a cost per student near $40 per year in 1999 U.S. dollars. To reduce costs, and motivated to save up scrip, Lancaster students rented individual pages of textbooks from the school library instead of purchasing the textbooks. Students would read their pages aloud to groups. Students commonly exchanged tutoring and paid for items and services with receipts from down tutoring. The schools did not teach submission to orthodox Christian beliefs or government authorities. As a result, most English-speaking countries developed mandatory publicly paid education explicitly to keep public education in "responsible" hands. These elites said that Lancaster schools might become dishonest, provide poor education, and were not accountable to established authorities. Lancaster's supporters responded that any child could cheat given the opportunity, and that the government was not paying for the education and thus deserved no say in their composition.
Though motivated by charity, Lancaster claimed in his pamphlets to be surprised to find that he lived well on the income of his school, even while the low costs made it available to the most impoverished street children. Ironically, Lancaster lived on the charity of friends in his later life. Although educational reform occurred on a local level at various points throughout history, the modern notion of education reform is tied to the spread of compulsory education. Economic growth and the spread of democracy raised the value of education and increased the importance of ensuring that all children and adults have access to free, high-quality, effective education. Modern education reforms are increasingly driven by a growing understanding of what works in education and how to go about successfully improving teaching and learning in schools. However, in some cases, the reformers' goals of "high-quality education" have meant "high-intensity education", with a narrow emphasis on teaching individual, test-friendly subskills quickly, regardless of long-term outcomes, developmental appropriateness, or broader educational goals. In the United States, Horace Mann (1796–1859) of Massachusetts used his political base and role as Secretary of the Massachusetts State Board of Education to promote public education in his home state and nationwide. Advocating that a substantial public investment be made in education, Mann and his proponents developed a strong system of state-supported common schools. His crusading style attracted wide middle-class support. Historian Ellwood P. Cubberley asserts: In 1852, Massachusetts passed a law making education mandatory. This model of free, accessible education spread throughout the country, and in 1917 Mississippi was the final state to adopt the law. John Dewey, a philosopher and educator based in Chicago and New York, helped conceptualize the role of American and international education during the first four decades of the 20th century. An important member of the American Pragmatist movement, he carried the subordination of knowledge to action into the educational world by arguing for experiential education that would enable children to learn theory and practice simultaneously; a well-known example is the practice of teaching elementary physics and biology to students while preparing a meal. He was a harsh critic of "dead" knowledge disconnected from practical human life. Dewey criticized the rigidity and volume of humanistic education, and the emotional idealizations of education based on the child-study movement that had been inspired by Rousseau and those who followed him. Dewey understood that children are naturally active and curious and learn by doing. Dewey's understanding of logic is presented in his work "Logic, the Theory of Inquiry" (1938). His educational philosophies were presented in "My Pedagogic Creed", The School and Society, The Child and Curriculum, and Democracy and Education (1916). Bertrand Russell criticized Dewey's conception of logic, saying "What he calls "logic" does not seem to me to be part of logic at all; I should call it part of psychology." Dewey left the University of Chicago in 1904 over issues relating to the Dewey School. Dewey's influence began to decline in the time after the Second World War and particularly in the Cold War era, as more conservative educational policies came to the fore.
The form of educational progressivism which was most successful in having its policies implemented has been dubbed "administrative progressivism" by historians. This began to be implemented in the early 20th century. While influenced particularly in its rhetoric by Dewey and even more by his popularizers, administrative progressivism was in its practice much more influenced by the Industrial Revolution and the concept of economies of scale. The administrative progressives are responsible for many features of modern American education, especially American high schools: counseling programs, the move from many small local high schools to large centralized high schools, curricular differentiation in the form of electives and tracking, curricular, professional, and other forms of standardization, and an increase in state and federal regulation and bureaucracy, with a corresponding reduction of local control at the school board level. (Cf. "State, federal, and local control of education in the United States", below) (Tyack and Cuban, pp. 17–26) These reforms have since become heavily entrenched, and many today who identify themselves as progressives are opposed to many of them, while conservative education reform during the Cold War embraced them as a framework for strengthening traditional curriculum and standards. More recent methods, instituted by groups such as the think tank Reform's education division and S.E.R., have attempted to pressure the government of the U.K. into more modernist educational reform, though this has met with limited success. In the United States, public education is characterized as "any federally funded primary or secondary school, administered to some extent by the government, and charged with educating all citizens. Although there is typically a cost to attend some public higher education institutions, they are still considered part of public education." In what would become the United States, the first public school was established in Boston, Massachusetts, on April 23, 1635. Puritan schoolmaster Philemon Pormont led instruction at the Boston Latin School. During this time, post-secondary education was a commonly utilized tool to distinguish one's social class and social status. Access to education was the "privilege of white, upper-class, Christian male children" in preparation for university education in ministry. In colonial America, to maintain Puritan religious traditions, formal and informal education instruction focused on teaching literacy. All colonists needed to understand the written language on some fundamental level in order to read the Bible and the colony's written secular laws. Religious leaders recognized that each person should be "educated enough to meet the individual needs of their station in life and social harmony." The first compulsory education laws were passed in Massachusetts between 1642 and 1648, when religious leaders noticed that not all parents were providing their children with proper education. These laws stated that all towns with 50 or more families were obligated to hire a schoolmaster to teach children reading, writing, and basic arithmetic. "In 1642 the General Court passed a law that required heads of households to teach all their dependents — apprentices and servants as well as their own children — to read English or face a fine. Parents could provide the instruction themselves or hire someone else to do it.
Selectmen were to keep 'a vigilant eye over their brethren and neighbors,' young people whose education was neglected could be removed from their parents or masters." The 1647 law eventually led to establishing publicly funded district schools in all Massachusetts towns, although, despite the threat of fines, compliance and quality of public schools were less than satisfactory. "Many towns were 'shamefully neglectful' of children's education. In 1718 '...by sad experience, it is found that many towns that not only are obliged by law, but are very able to support a grammar school, yet choose rather to incur and pay the fine or penalty than maintain a grammar school." When John Adams drafted the Massachusetts Constitution in 1780, he included provisions for a comprehensive education law that guaranteed public education to "all" citizens. However, access to formal education in secondary schools and colleges was reserved for free, white males. During the 17th and 18th centuries, females received little or no formal education except for home learning or attending Dame Schools. Likewise, many educational institutions maintained a policy of refusing to admit Black applicants. The Virginia Code of 1819 outlawed teaching enslaved people to read or write. Soon after the American Revolution, early leaders, like Thomas Jefferson and John Adams, proposed the creation of a more "formal and unified system of publicly funded schools" to satiate the need to "build and maintain commerce, agriculture and shipping interests". Their concept of free public education was not well received and did not begin to take hold until the 1830s. However, in 1790, evolving socio-cultural ideals in the Commonwealth of Pennsylvania led to the first significant and systematic reform in education legislation that mandated economic conditions would not inhibit a child's access to education: "Constitution of the Commonwealth of Pennsylvania – 1790 ARTICLE VII Section I. The legislature shall, as soon as conveniently may be, provide, by law, for the establishment of schools throughout the state, in such manner that the poor may be taught gratis." During Reconstruction, from 1865 to 1877, African Americans worked to encourage public education in the South. The U.S. Supreme Court decision in Plessy v. Ferguson, which held that "segregated public facilities were constitutional so long as the black and white facilities were equal to each other", meant that African American children were legally allowed to attend public schools, although these schools were still segregated based on race. However, by the mid-twentieth century, civil rights groups would challenge racial segregation. During the second half of the nineteenth century (1870–1914), America's Industrial Revolution refocused the nation's attention on the need for a universally accessible public school system. Inventions, innovations, and improved production methods were critical to the continued growth of American manufacturing. To compete in the global economy, an overwhelming demand emerged for literate workers who possessed practical training. Citizens argued, "educating children of the poor and middle classes would prepare them to obtain good jobs, thereby strengthen the nation's economic position." Institutions became an essential tool in yielding ideal factory workers with sought-after attitudes and desired traits such as dependability, obedience, and punctuality.
Vocationally oriented schools offered practical subjects like shop classes for students who were not planning to attend college for financial or other reasons. Not until the latter part of the 19th century did public elementary schools become available throughout the country. However, it would be longer before children of color, girls, and children with special needs attained access to free public education. Systemic bias remained a formidable barrier. From the 1950s to the 1970s, many of the proposed and implemented reforms in U.S. education stemmed from the civil rights movement and related trends; examples include ending racial segregation, busing for the purpose of desegregation, affirmative action, and the banning of school prayer. In the early 1950s, most U.S. public schools operated under a legally sanctioned racial segregation system. Civil Rights reform movements sought to address the biases that ensured unequal distribution of academic resources such as school funding, qualified and experienced teachers, and learning materials to those socially excluded communities. In the early 1950s, NAACP lawyers brought class-action lawsuits on behalf of black schoolchildren and their families in Kansas, South Carolina, Virginia, and Delaware, petitioning court orders to compel school districts to let black students attend white public schools. Finally, in 1954, the U.S. Supreme Court rejected that framework with Brown v. Board of Education and declared state-sponsored segregation of public schools unconstitutional. In 1964, Title VI of the Civil Rights Act "prohibited discrimination on the basis of race, color, and national origin in programs and activities receiving federal financial assistance." Educational institutions could now utilize public funds to implement in-service training programs to assist teachers and administrators in establishing desegregation plans. In 1965, the Higher Education Act (HEA) authorized federal aid for postsecondary students. The Elementary and Secondary Education Act of 1965 (ESEA) represented the federal government's commitment to providing equal access to quality education, including for children from low-income families, children with limited English proficiency, and other minority groups. This legislation had positive retroactive implications for Historically Black Colleges and Universities, more commonly known as HBCUs. "The Higher Education Act of 1965, as amended, defines an HBCU as: "…any historically black college or university that was established prior to 1964, whose principal mission was, and is, the education of black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary [of Education] to be a reliable authority as to the quality of training offered or is, according to such an agency or association, making reasonable progress toward accreditation." Known as the Bilingual Education Act, Title VII of ESEA offered federal aid to school districts to provide bilingual instruction for students with limited English speaking ability. The Education Amendments of 1972 (Public Law 92-318, 86 Stat. 327) established the Education Division in the U.S. Department of Health, Education, and Welfare and the National Institute of Education.
Title IX of the Education Amendments of 1972 states, "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance." Equal Educational Opportunities Act of 1974 - Civil Rights Amendments to the Elementary and Secondary Education Act of 1965: "Title I: Bilingual Education Act - Authorizes appropriations for carrying out the provisions of this Act. Establishes, in the Office of Education, an Office of Bilingual Education through which the Commissioner of Education shall carry out his functions relating to bilingual education. Authorizes appropriations for school nutrition and health services, correction education services, and ethnic heritage studies centers. Title II: Equal Educational Opportunities and the Transportation of Students: Equal Educational Opportunities Act - Provides that no state shall deny equal educational opportunity to an individual on account of his or her race, color, sex, or national origin by means of specified practices... Title IV: Consolidation of Certain Education Programs: Authorizes appropriations for use in various education programs including libraries and learning resources, education for use of the metric system of measurement, gifted and talented children programs, community schools, career education, consumers' education, women's equity in education programs, and arts in education programs. Community Schools Act - Authorizes the Commissioner to make grants to local educational agencies to assist in planning, establishing, expanding, and operating community education programs Women's Educational Equity Act - Establishes the Advisory Council on Women's Educational Programs and sets forth the composition of such Council. Authorizes the Commissioner of Education to make grants to, and enter into contracts with, public agencies, private nonprofit organizations, and individuals for activities designed to provide educational equity for women in the United States. Title V: Education Administration: Family Educational Rights and Privacy Act (FERPA)- Provides that no funds shall be made available under the General Education Provisions Act to any State or local educational agency or educational institution which denies or prevents the parents of students to inspect and review all records and files regarding their children. Title VII: National Reading Improvement Program: Authorizes the Commissioner to contract with State or local educational agencies for the carrying out by such agencies, in schools having large numbers of children with reading deficiencies, of demonstration projects involving the use of innovative methods, systems, materials, or programs which show promise of overcoming such reading deficiencies." In 1975, the Education for All Handicapped Children Act (Public Law 94-142) ensured that all handicapped children (ages 3–21) received a "free, appropriate public education" designed to meet their special needs. During the 1980s, some of the momentum of education reform moved from the left to the right, with the release of A Nation at Risk and Ronald Reagan's efforts to reduce or eliminate the United States Department of Education.
"[T]he federal government and virtually all state governments, teacher training institutions, teachers' unions, major foundations, and the mass media have all pushed strenuously for higher standards, greater accountability, more "time on task," and more impressive academic results". Per the shift in educational motivation, families sought institutional alternatives, including "charter schools, progressive schools, Montessori schools, Waldorf schools, Afrocentric schools, religious schools - or home school instruction in their communities." In 1984 President Reagan enacted the Education for Economic Security Act. In 1989, the Child Development and Education Act of 1989 authorized funds for Head Start Programs to include child care services. In the latter half of the decade, E. D. Hirsch put forth an influential attack on one or more versions of progressive education, advocating an emphasis on "cultural literacy": the shared facts, phrases, and texts he considered essential background knowledge. See also Uncommon Schools. In 1994, the land grant system was expanded via the Elementary and Secondary Education Act to include tribal colleges. Most states and districts in the 1990s adopted outcome-based education (OBE) in some form or another. A state would create a committee to adopt standards, and choose a quantitative instrument to assess whether the students knew the required content or could perform the required tasks. In 1992 the National Commission on Time and Learning, Extension revised funding for civic education programs and those for educationally disadvantaged children. In 1994 the Improving America's Schools Act (IASA) reauthorized the Elementary and Secondary Education Act of 1965 and amended it to include the Eisenhower Professional Development Program. IASA designated Title I funds for low-income and otherwise marginalized groups, i.e., females, minorities, individuals with disabilities, and individuals with limited English proficiency (LEP). By tethering federal funding distributions to student achievement, IASA meant to use high-stakes testing and curriculum standards to hold schools accountable for their students achieving at the same level as other students. The Act significantly increased impact aid for the establishment of the Charter School Program, drug awareness campaigns, bilingual education, and technology. In 1998 the Charter School Expansion Act amended the Charter School Program, which had been enacted in 1994. The Consolidated Appropriations Act of 2001 appropriated funding to repair educational institutions' buildings as well as to repair and renovate charter school facilities, reauthorized the Even Start program, and enacted the Children's Internet Protection Act. The standards-based National Education Goals 2000, set by the U.S. Congress in the 1990s, were based on the principles of outcomes-based education. In 2002, the standards-based reform movement culminated in the No Child Left Behind Act of 2001, under which achievement standards were set by each individual state. This federal policy was active in the United States until 2015. An article released by CNBC.com said a principal Senate committee would take into account legislation that reauthorizes and modernizes the Carl D. Perkins Act. President George W. Bush approved this statute on August 12, 2006. The new bill emphasizes the importance of federal funding for various Career and Technical Education (CTE) programs that will better provide learners with in-demand skills. Pell Grants are a specific amount of money given by the government every school year to disadvantaged students who need help paying college tuition fees.
At present, there are many initiatives aimed at dealing with these concerns, such as innovative cooperation between federal and state governments, educators, and the business sector. One of these efforts is the Pathways to Technology Early College High School (P-TECH). This six-year program was launched in cooperation with IBM, educators from three cities in New York, Chicago, and Connecticut, and over 400 businesses. The program offers students a combined high school and associate degree course of study focusing on the STEM curriculum. The High School Involvement Partnership, a private and public venture, was established with the help of Northrop Grumman, a global security firm. It has given assistance to some 7,000 high school students (juniors and seniors) since 1971 by means of one-on-one coaching as well as exposure to STEM areas and careers. The American Recovery and Reinvestment Act, enacted in 2009, reserved more than $85 billion in public funds to be used for education. In 2009, the Council of Chief State School Officers and the National Governors Association launched the Common Core State Standards Initiative. In 2012 the Obama administration launched the Race to the Top competition aimed at spurring K–12 education reform through higher standards. "The Race to the Top – District competition will encourage transformative change within schools, targeted toward leveraging, enhancing, and improving classroom practices and resources. The four key areas of reform include: In 2015, under the Obama administration, many of the more restrictive elements that were enacted under No Child Left Behind (NCLB, 2001) were removed in the Every Student Succeeds Act (ESSA, 2015), which limits the role of the federal government in school liability. The Every Student Succeeds Act reformed educational standards by "moving away from such high stakes and assessment based accountability models" and focused on assessing student achievement from a holistic approach by utilizing qualitative measures. Some argue that giving states more authority can help prevent considerable discrepancies in educational performance across different states. ESSA, approved by President Obama in 2015, amended and empowered the Elementary and Secondary Education Act of 1965. The Department of Education may carry out measures to draw attention to such differences by pinpointing the lowest-performing state governments and supplying information on each state's condition and progress on different educational parameters. It can also provide reasonable funding along with technical aid to help states with similar demographics collaborate in improving their public education programs. This approach uses a methodology that values purposeful engagement in activities that turn students into self-reliant and efficient learners. Holding on to the view that everyone possesses natural gifts that are unique to one's personality (e.g. computational aptitude, musical talent, visual arts abilities), it likewise upholds the idea that children, despite their inexperience and tender age, are capable of coping with anguish, able to survive hardships, and can rise above difficult times. In 2017, Betsy DeVos was installed as the 11th Secretary of Education. A strong proponent of school choice, school voucher programs, and charter schools, DeVos was a much-contested choice as her own education and career had little to do with formal experience in the US education system.
In a Republican-dominated Senate, she received a 50–50 vote, a tie that was broken by Vice President Mike Pence. Prior to her appointment, DeVos received a BA degree in business economics from Calvin College in Grand Rapids, Michigan, and she served as chairman of an investment management firm, The Windquest Group. She supported the idea of leaving education to state governments under the new K-12 legislation. DeVos cited the interventionist approach of the federal government to education policy following the signing of the ESSA; the primary approach to that rule has not changed significantly. In her opinion, the populist politics of the education movement encouraged reformers to make promises that were not very realistic and therefore difficult to deliver. On July 31, 2018, President Donald Trump signed the Strengthening Career and Technical Education for the 21st Century Act (HR 2353). The Act reauthorized the Carl D. Perkins Career and Technical Education Act, a $1.2 billion program modified by the United States Congress in 2006. A move to change the Higher Education Act was also deferred. The legislation, which took effect on July 1, 2019, replaced the Carl D. Perkins Career and Technical Education (Perkins IV) Act of 2006. Stipulations in Perkins V enable school districts to make use of federal subsidies for all students' career search and development activities in the middle grades as well as comprehensive guidance and academic mentoring in the upper grades. At the same time, this law revised the meaning of "special populations" to include homeless persons, foster youth, those who left the foster care system, and children with parents on active duty in the United States armed forces. Another factor to consider in education reform is that of equity and access. Contemporary issues in education in the United States reflect a history of inequalities that carry consequences for educational attainment across different social groups. A history of racial, and subsequently class, segregation in the U.S. resulted from practices of law. Residential segregation is a direct result of twentieth-century policies that separated people by race using zoning and redlining practices, in addition to other housing policies, whose effects continue to endure in the United States. These neighborhoods that have been segregated de jure—by force of purposeful public policy at the federal, state, and local levels—disadvantage people of color as students must attend school near their homes. With the inception of the New Deal between 1933 and 1939, and during and following World War II, federally funded public housing was explicitly racially segregated by the local government in conjunction with federal policies through projects that were designated for Whites or Black Americans in the South, Northeast, Midwest, and West. Following an easing of the housing shortage after World War II, the federal government subsidized the relocation of Whites to the suburbs. The Federal Housing Administration and Veterans Administration backed such developments on the East Coast in towns like Levittown on Long Island and in New Jersey, Pennsylvania, and Delaware. On the West Coast, there were Panorama City, Lakewood, Westlake, and Seattle suburbs developed by Bertha and William Boeing. As White families left for the suburbs, Black families remained in public housing and were explicitly placed in Black neighborhoods.
Policies such as public housing director Harold Ickes's "neighborhood composition rule" maintained this segregation by establishing that public housing must not interfere with the pre-existing racial composition of neighborhoods. Federal loan guarantees were given to builders who adhered to the condition that no sales were made to Black families and that each deed prohibited re-sales to Black families, whom the Federal Housing Administration (FHA) described as an "incompatible racial element". In addition, banks and savings institutions refused loans to Black families in White suburbs and to Black families in Black neighborhoods. In the mid-twentieth century, urban renewal programs forced low-income black residents to reside in places farther from universities, hospitals, or business districts, and relocation options consisted of public housing high-rises and ghettos. This history of de jure segregation has impacted resource allocation for public education in the United States, with schools continuing to be segregated by race and class. Low-income White students are more likely than Black students to be integrated into middle-class neighborhoods and less likely to attend schools with other predominantly disadvantaged students. Students of color disproportionately attend underfunded schools and Title I schools in environments marked by environmental pollution and stagnant economic mobility, with limited access to college readiness resources. According to research, schools attended by primarily Hispanic or African American students often have high turnover of teaching staff and are labeled high-poverty schools, in addition to having limited educational specialists, fewer available extracurricular opportunities, greater numbers of provisionally licensed teachers, little access to technology, and buildings that are not well maintained. With this segregation, more local property tax is allocated to wealthier communities, and public schools' dependence on local property taxes has led to large disparities in funding between neighboring districts. The wealthiest 10% of school districts spend approximately ten times more per student than the poorest 10% of school districts. This history of racial and socioeconomic class segregation in the U.S. has manifested in a racial wealth divide. With this history of geographic and economic segregation, trends illustrate a racial wealth gap that has impacted educational outcomes and their concomitant economic gains for minorities. Wealth or net worth—the difference between gross assets and debt—is a stock of financial resources and a significant indicator of financial security that offers a more complete measure of household capability and functioning than income. Within the same income bracket, the chance of completing college differs for White and Black students. Nationally, White students are at least 11% more likely to complete college across all four income groups. Intergenerational wealth is another result of this history, with White college-educated families three times as likely as Black families to receive an inheritance of $10,000 or more. 10.6% of White children from low-income backgrounds and 2.5% of Black children from low-income backgrounds reach the top 20% of income distribution as adults. Less than 10% of Black children from low-income backgrounds reach the top 40%. These disadvantages facing students of color are apparent as early as early childhood education.
By the age of five, children of color are impacted by opportunity gaps indicated by poverty, the school readiness gap, segregated low-income neighborhoods, implicit bias, and inequalities within the justice system; Hispanic and African American boys account for as much as 60% of the total incarcerated population. These populations are also more likely to experience adverse childhood experiences (ACEs). High-quality early care and education are less accessible to children of color, particularly African American preschoolers: findings from the National Center for Education Statistics show that in 2013, 40% of Hispanic and 36% of White children were enrolled in center-based classrooms rated as high quality, while 25% of African American children were enrolled in these programs. 15% of African American children attended low-ranking center-based classrooms. In home-based settings, 30% of White children and over 50% of Hispanic and African American children attended low-rated programs. In the first decade of the 21st century, several issues are salient in debates over further education reform: Charter schools are public independent institutions in which both the cost and risk are fully funded by the taxpayers. Some charter schools are nonprofit in name only and are structured in ways that individuals and private enterprises connected to them can make money. Other charter schools are for-profit. In many cases, the public is largely unaware of this rapidly changing educational landscape, the debate between public and private/market approaches, and the decisions that are being made that affect their children and communities. Critics have accused for-profit entities (education management organizations, EMOs) and private foundations such as the Bill and Melinda Gates Foundation, the Eli and Edythe Broad Foundation, and the Walton Family Foundation of funding charter school initiatives to undermine public education and turn education into a "Business Model" which can make a profit. In some cases a school's charter is held by a non-profit that chooses to contract all of the school's operations to a third party, often a for-profit charter management organization (CMO). This arrangement is defined as a vendor-operated school (VOS). Economists, such as the late Nobel laureate Milton Friedman, advocate for school choice to promote excellence in education through competition and choice. A competitive market for schooling provides a workable method of accountability for results. Public education vouchers permit guardians to select and pay any school, public or private, with public funds that were formerly allocated directly to local public schools. The theory is that children's guardians will naturally shop for the best schools for their children, much as is already done at the college level. Many reforms based on school choice have led to slight to moderate improvements. Some teachers' union members see those improvements as insufficient to offset the decreased teacher pay and job security. For instance, New Zealand's landmark reform in 1989, during which schools were granted substantial autonomy, funding was devolved to schools, and parents were given a free choice of which school their children would attend, led to moderate improvements in most schools. It was argued that the associated increases in inequity and greater racial stratification in schools nullified the educational gains.
Others, however, argued that the original system created more inequity, because lower-income students were required to attend poorer-performing inner-city schools and were not allowed the school choice or better education available to higher-income inhabitants of suburbs. Thus, it was argued that school choice promoted social mobility and increased test scores, especially in the cases of low-income students. Similar results have been found in other jurisdictions. The small improvements produced by some school choice policies seem to reflect weaknesses in the ways that choice is implemented, rather than a failure of the basic principle itself. Critics of teacher tenure claim that the laws protect ineffective teachers from being fired, which can be detrimental to student success. Tenure laws vary from state to state, but generally they set a probationary period during which the teacher proves themselves worthy of the lifelong position. Probationary periods range from one to three years. Advocates for tenure reform often consider these periods too short to make such an important decision, especially when that decision is exceptionally hard to revoke. Due process restrictions protect tenured teachers from being wrongfully fired; however, these restrictions can also prevent administrators from removing ineffective or inappropriate teachers. A 2008 survey conducted by the US Department of Education found that, on average, only 2.1% of teachers are dismissed each year for poor performance. In October 2010 Apple Inc. CEO Steve Jobs had a consequential meeting with U.S. President Barack Obama to discuss U.S. competitiveness and the nation's education system. During the meeting Jobs recommended pursuing policies that would make it easier for school principals to hire and fire teachers based on merit. In 2012 tenure for school teachers was challenged in a California lawsuit called Vergara v. California. The primary issue in the case was the impact of tenure on student outcomes and on equity in education. On June 10, 2014, the trial judge ruled that California's teacher tenure statute produced disparities that "shock the conscience" and violate the equal protection clause of the California Constitution. On July 7, 2014, U.S. Secretary of Education Arne Duncan commented on the Vergara decision during a meeting with President Barack Obama and representatives of teachers' unions. Duncan said that tenure for school teachers "should be earned through demonstrated effectiveness" and should not be granted too quickly. Specifically, he criticized the 18-month tenure period at the heart of the Vergara case as being too short to be a "meaningful bar." According to a 2005 report from the OECD, the United States is tied for first place with Switzerland when it comes to annual spending per student on its public schools, with each of those two countries spending more than $11,000 (in U.S. currency). Despite this high level of funding, U.S. public schools lag behind the schools of other rich countries in the areas of reading, math, and science. A further analysis of developed countries shows no correlation between per-student spending and student performance, suggesting that there are other factors influencing education. Top performers include Singapore, Finland and Korea, all with relatively low spending on education, while high spenders including Norway and Luxembourg have relatively low performance. One possible factor is the distribution of the funding.
In the US, schools in wealthy areas tend to be overfunded while schools in poorer areas tend to be underfunded. These differences in spending between schools or districts may accentuate inequalities if they result in the best teachers moving to teach in the wealthiest areas. The inequality between districts and schools led 23 states to institute school finance reform based on adequacy standards that aim to increase funding to low-income districts. A 2018 study found that between 1990 and 2012, these finance reforms led to an increase in funding and test scores in low-income districts, which suggests finance reform is effective at bridging inter-district performance inequalities. It has also been shown that the socioeconomic situation of a student's family has the most influence in determining success, suggesting that even if increased funds in a low-income area improve performance, students there may still perform worse than their peers from wealthier districts. Starting in the early 1980s, a series of analyses by Eric Hanushek indicated that the amount spent on schools bore little relationship to student learning. This controversial argument, which focused attention on how money was spent instead of how much was spent, led to lengthy scholarly exchanges. In part the arguments fed into the class size debates and other discussions of "input policies." It also moved reform efforts towards issues of school accountability (including No Child Left Behind) and the use of merit pay and other incentives. There have been studies that show smaller class sizes and newer buildings (both of which require higher funding to implement) lead to academic improvements. Many of the reform ideas that stray from the traditional format also require greater funding. According to a 1999 article, William J. Bennett, former U.S. Secretary of Education, argued that increased levels of spending on public education have not made the schools better, citing the following statistics: EDUCATION FOR ALL THROUGHOUT LIFE The EFA Assessment 2000 was launched in July 1998 with an aim to help countries identify both problems and prospects for further progress of EFA, and to strengthen their capacity to improve and monitor the provision and outcomes of basic education. Some 179 countries set up National Assessment Groups which collected quantitative data focusing on eighteen core indicators and carried out case studies to collect qualitative information. The Education 2030 Agenda refers to the global commitment of the Education for All movement to ensure access to basic education for all. It is an essential part of the 2030 Agenda for Sustainable Development. The roadmap to achieve the Agenda is the Education 2030 Incheon Declaration and Framework for Action, which outlines how countries, working with UNESCO and global partners, can translate commitments into action. The United Nations, over 70 ministers, representatives of member countries, bilateral and multilateral agencies, regional organizations, academic institutions, teachers, civil society, and youth supported the Framework for Action of the Education 2030 platform. The Framework was described as the outcome of continuing consultation to provide guidance for countries in implementing this Agenda. At the same time, it mobilizes various stakeholders in the new education objectives, coordination, implementation process, funding, and review of Education 2030.
In 1995, the minister of education, Sukavich Rangsitpol, launched a series of education reforms intended to realize the potential of the Thai people to develop themselves for a better quality of life and to develop the nation for peaceful co-existence in the global community. Since December 1995, activities have been conducted in four main areas: · School reform. Efforts have been stepped up to standardize the quality of education in all levels and types of schools and educational institutions. Educational coverage has been expanded. · Teacher reform. Training and recruitment of teachers have been reformed urgently and comprehensively both in public and private schools. Educational administrators and personnel have been developed continuously. · Curriculum reform. Curriculum and teaching-learning processes have been reformed on an urgent basis in order to raise the educational quality of all types and levels. · Administrative reform. Through devolution, educational institutions have been empowered to make administrative decisions and to offer appropriate educational services which are as consistent as possible with the local lifestyle and conditions. Provincial organizations have been strengthened to facilitate devolution, while private participation of the family and community has been promoted and supported. School-based management (SBM) was implemented in Thailand in 1997 in the course of a reform aimed at overcoming a profound crisis in the education system. Effective Provincial Education Councils with strong community membership were established. The purpose of decentralization is to ensure that local education needs are met, so there should be a close relationship between community representatives and officials. Thus, decentralization requires a careful balance between the guidance of community-selected representatives and that of government officials in representing local needs and priorities. By 1997, the 1995 Education Reform had brought 40,000 schools under the Education Reform Project; these schools were required to improve their school environment and encourage the local community to be involved in school administration and management. Those schools later accepted 4.35 million students aged between 3 and 17 years old from poor families in remote areas. Thereafter, Thailand successfully established Education for All (EFA). Thus, Thailand received the 1997 ACEID award for excellence in education from UNESCO. According to UNESCO, Thailand's education reform has led to the following results: The World Bank reported that after the 1997 Asian financial crisis, income in the northeast, the poorest part of Thailand, rose by 46 percent from 1998 to 2006. Nationwide poverty fell from 21.3 to 11.3 percent. The learning crisis is the reality that while the majority of children around the world attend school, a large proportion of them are not learning. A World Bank study found that "53 percent of children in low- and middle-income countries cannot read and understand a simple story by the end of primary school." While schooling has increased rapidly over the last few decades, learning has not followed suit. Many practitioners and academics call for education system reform in order to address the learning needs of all children. The movement to use computers more in education naturally includes many unrelated ideas, methods, and pedagogies since there are many uses for digital computers.
For example, the fact that computers are naturally good at math leads to the question of the use of calculators in math education. The Internet's communication capabilities make it potentially useful for collaboration and foreign language learning. The computer's ability to simulate physical systems makes it potentially useful in teaching science. More often, however, debate about digital education reform centers on more general applications of computers to education, such as electronic test-taking and online classes. Another viable addition to digital education has been blended learning. In 2009, over 3 million K-12 students took an online course, compared to 2000 when 45,000 took an online course. Examples of delivery formats range from purely online to blended to traditional education. Research results show that the most effective learning takes place in a blended format. This allows children to view the lecture ahead of time and then spend class time practicing, refining, and applying what they have previously learned. The idea of creating artificial intelligence led some computer scientists to believe that teachers could be replaced by computers, through something like an expert system; however, attempts to accomplish this have predictably proved inflexible. The computer is now more understood to be a tool or assistant for the teacher and students. Harnessing the richness of the Internet is another goal. In some cases classrooms have been moved entirely online, while in other instances the goal is more to learn how the Internet can be more than a classroom. Web-based international educational software is under development by students at New York University, based on the belief that current educational institutions are too rigid: effective teaching is not routine, students are not passive, and questions of practice are not predictable or standardized. The software allows for courses tailored to an individual's abilities through frequent and automatic multiple intelligences assessments. Ultimate goals include assisting students to be intrinsically motivated to educate themselves and aiding the student in self-actualization. Courses typically taught only in college are being reformatted so that they can be taught to any level of student, whereby elementary school students may learn the foundations of any topic they desire. Such a program has the potential to remove the bureaucratic inefficiencies of education in modern countries, and with the decreasing digital divide, help developing nations rapidly achieve a similar quality of education. With an open format similar to Wikipedia, any teacher may upload their courses online and a feedback system will help students choose relevant courses of the highest quality. Teachers can provide links in their digital courses to webcast videos of their lectures. Students will have personal academic profiles and a forum will allow students to pose complex questions, while simpler questions will be automatically answered by the software, which will bring the student to a solution by searching through the knowledge database, which includes all available courses and topics. The 21st century ushered in the acceptance and encouragement of internet research conducted on college and university campuses, in homes, and even in gathering areas of shopping centers. The addition of cyber cafes on campuses and in coffee shops, the loaning of communication devices from libraries, and the availability of more portable technology devices opened up a world of educational resources.
Knowledge had always been available to the elite, yet the provision of networking devices, even wireless gadget sign-outs from libraries, made the availability of information an expectation of most persons. Cassandra B. Whyte researched the future of computer use on higher education campuses, focusing on student affairs. Though at first seen as a data collection and outcome reporting tool, the use of computer technology in the classrooms, meeting areas, and homes continued to unfold. The sole dependence on paper resources for subject information diminished, and e-books and articles, as well as online courses, were anticipated to become increasingly staple and affordable choices provided by higher education institutions, according to Whyte in a 2002 presentation. Digitally "flipping" classrooms is a trend in digital education that has gained significant momentum. Will Richardson, author and visionary for the digital education realm, points to the not-so-distant future and the seemingly infinite possibilities for digital communication linked to improved education. Education on the whole, as a stand-alone entity, has been slow to embrace these changes. The use of web tools such as wikis, blogs, and social networking sites is tied to increasing overall effectiveness of digital education in schools. Examples exist of teacher and student success stories where learning has transcended the classroom and has reached far out into society. The media has been instrumental in pushing formal educational institutions to become savvier in their methods. Additionally, advertising has been (and continues to be) a vital force in shaping students' and parents' thought patterns. Technology is a dynamic entity that is constantly in flux. As time presses on, new technologies will continue to break paradigms that will reshape human thinking regarding technological innovation. This concept stresses a certain disconnect between teachers and learners and the growing chasm that started some time ago. Richardson asserts that traditional classrooms will essentially enter entropy unless teachers increase their comfort and proficiency with technology. Administrators are not exempt from the technological disconnect. They must recognize the existence of a younger generation of teachers who were born during the Digital Age and are very comfortable with technology. However, when old meets new, especially in a mentoring situation, conflict seems inevitable. Ironically, the answer to the outdated mentor may be digital collaboration with worldwide mentor webs composed of individuals with creative ideas for the classroom. This article incorporates text from a free content work. Licensed under CC-BY-SA IGO 3.0 (license statement/permission). Text taken from Education Transforms Lives, 6, 8–9, UNESCO.
[ { "paragraph_id": 0, "text": "Education reform is the name given to the goal of changing public education. The meaning and education methods have changed through debates over what content or experiences result in an educated individual or an educated society. Historically, the motivations for reform have not reflected the current needs of society. A consistent theme of reform includes the idea that large systematic changes to educational standards will produce social returns in citizens' health, wealth, and well-being.", "title": "" }, { "paragraph_id": 1, "text": "As part of the broader social and political processes, the term education reform refers to the chronology of significant, systematic revisions made to amend the educational legislation, standards, methodology, and policy affecting a nation's public school system to reflect the needs and values of contemporary society. 18th century, classical education instruction from an in-home personal tutor, hired at the family's expense, was primarily a privilege for children from wealthy families. Innovations such as encyclopedias, public libraries, and grammar schools all aimed to relieve some of the financial burden associated with the expenses of the classical education model. Motivations during the Victorian era emphasized the importance of self-improvement. Victorian education focused on teaching commercially valuable topics, such as modern languages and mathematics, rather than classical liberal arts subjects, such as Latin, art, and history.", "title": "" }, { "paragraph_id": 2, "text": "Motivations for education reformists like Horace Mann and his proponents focused on making schooling more accessible and developing a robust state-supported common school system. John Dewey, an early 20th-century reformer, focused on improving society by advocating for a scientific, pragmatic, or democratic principle-based curriculum. Whereas Maria Montessori incorporated humanistic motivations to \"meet the needs of the child\". In historic Prussia, a motivation to foster national unity led to formal education concentrated on teaching national language literacy to young children, resulting in Kindergarten.", "title": "" }, { "paragraph_id": 3, "text": "The history of educational pedagogy in the United States has ranged from teaching literacy and proficiency of religious doctrine to establishing cultural literacy, assimilating immigrants into a democratic society, producing a skilled labor force for the industrialized workplace, preparing students for careers, and competing in a global marketplace. Education inequality is also a motivation for education reform, seeking to address problems of a community.", "title": "" }, { "paragraph_id": 4, "text": "Education reform, in general, implies a continual effort to modify and improve the institution of education. Over time, as the needs and values of society change, attitudes towards public education change. As a social institution, education plays an integral role in the process of socialization. \"Socialization is broadly composed of distinct inter- and intra-generational processes. Both involve the harmonization of an individual's attitudes and behaviors with that of their socio-cultural milieu.\" Educational matrices mean to reinforce those socially acceptable informal and formal norms, values, and beliefs that individuals need to learn in order to be accepted as good, functioning, and productive members of their society. 
Education reform is the process of constantly renegotiating and restructuring the educational standards to reflect the ever-evolving contemporary ideals of social, economic, and political culture. Reforms can be based on bringing education into alignment with a society's core values. Reforms that attempt to change a society's core values can connect alternative education initiatives with a network of other alternative institutions.", "title": "Motivations for education reform" }, { "paragraph_id": 5, "text": "Education reform has been pursued for a variety of specific reasons, but generally most reforms aim at redressing some societal ills, such as poverty-, gender-, or class-based inequities, or perceived ineffectiveness. Current education trends in the United States represent multiple achievement gaps across ethnicities, income levels, and geographies. As McKinsey and Company reported in a 2009 analysis, \"These educational gaps impose on the United States the economic equivalent of a permanent national recession.\" Reforms are usually proposed by thinkers who aim to redress societal ills or institute societal changes, most often through a change in the education of the members of a class of people—the preparation of a ruling class to rule or a working class to work, the social hygiene of a lower or immigrant class, the preparation of citizens in a democracy or republic, etc. The idea that all children should be provided with a high level of education is a relatively recent idea, and has arisen largely in the context of Western democracy in the 20th century.", "title": "Motivations for education reform" }, { "paragraph_id": 6, "text": "The \"beliefs\" of school districts are optimistic that quite literally \"all students will succeed\", which in the context of high school graduation examination in the United States, all students in all groups, regardless of heritage or income will pass tests that in the introduction typically fall beyond the ability of all but the top 20 to 30 percent of students. The claims clearly renounce historical research that shows that all ethnic and income groups score differently on all standardized tests and standards based assessments and that students will achieve on a bell curve. Instead, education officials across the world believe that by setting clear, achievable, higher standards, aligning the curriculum, and assessing outcomes, learning can be increased for all students, and more students can succeed than the 50 percent who are defined to be above or below grade level by norm referenced standards.", "title": "Motivations for education reform" }, { "paragraph_id": 7, "text": "States have tried to use state schools to increase state power, especially to make better soldiers and workers. This strategy was first adopted to unify related linguistic groups in Europe, including France, Germany and Italy. Exact mechanisms are unclear, but it often fails in areas where populations are culturally segregated, as when the U.S. 
Indian school service failed to suppress Lakota and Navaho, or when a culture has widely respected autonomous cultural institutions, as when the Spanish failed to suppress Catalan.", "title": "Motivations for education reform" }, { "paragraph_id": 8, "text": "Many students of democracy have desired to improve education in order to improve the quality of governance in democratic societies; the necessity of good public education follows logically if one believes that the quality of democratic governance depends on the ability of citizens to make informed, intelligent choices, and that education can improve these abilities.", "title": "Motivations for education reform" }, { "paragraph_id": 9, "text": "Politically motivated educational reforms of the democratic type are recorded as far back as Plato in The Republic. In the United States, this lineage of democratic education reform was continued by Thomas Jefferson, who advocated ambitious reforms partly along Platonic lines for public schooling in Virginia.", "title": "Motivations for education reform" }, { "paragraph_id": 10, "text": "Another motivation for reform is the desire to address socio-economic problems, which many people see as having significant roots in lack of education. Starting in the 20th century, people have attempted to argue that small improvements in education can have large returns in such areas as health, wealth and well-being. For example, in Kerala, India in the 1950s, increases in women's health were correlated with increases in female literacy rates. In Iran, increased primary education was correlated with increased farming efficiencies and income. In both cases some researchers have concluded these correlations as representing an underlying causal relationship: education causes socio-economic benefits. In the case of Iran, researchers concluded that the improvements were due to farmers gaining reliable access to national crop prices and scientific farming information.", "title": "Motivations for education reform" }, { "paragraph_id": 11, "text": "As taught from the 18th to the 19th century, Western classical education curriculums focused on concrete details like \"Who?\", \"What?\", \"When?\", \"Where?\". Unless carefully taught, large group instruction naturally neglects asking the theoretical \"Why?\" and \"Which?\" questions that can be discussed in smaller groups.", "title": "History" }, { "paragraph_id": 12, "text": "Classical education in this period also did not teach local (vernacular) languages and culture. Instead, it taught high-status ancient languages (Greek and Latin) and their cultures. This produced odd social effects in which an intellectual class might be more loyal to ancient cultures and institutions than to their native vernacular languages and their actual governing authorities.", "title": "History" }, { "paragraph_id": 13, "text": "Jean-Jacques Rousseau, father of the Child Study Movement, centered the child as an object of study.", "title": "History" }, { "paragraph_id": 14, "text": "In Emile: Or, On Education, Rousseau's principal work on education lays out an educational program for a hypothetical newborn's education through adulthood.", "title": "History" }, { "paragraph_id": 15, "text": "Rousseau provided a dual critique of the educational vision outlined in Plato's Republic and that of his society in contemporary Europe. He regarded the educational methods contributing to the child's development; he held that a person could either be a man or a citizen. 
While Plato's plan could have brought the latter at the expense of the former, contemporary education failed at both tasks. He advocated a radical withdrawal of the child from society and an educational process that utilized the child's natural potential and curiosity, teaching the child by confronting them with simulated real-life obstacles and conditioning the child through experience rather intellectual instruction.", "title": "History" }, { "paragraph_id": 16, "text": "Rousseau ideas were rarely implemented directly, but influenced later thinkers, particularly Johann Heinrich Pestalozzi and Friedrich Wilhelm August Fröbel, the inventor of the kindergarten.", "title": "History" }, { "paragraph_id": 17, "text": "European and Asian nations regard education as essential to maintaining national, cultural, and linguistic unity. In the late 18th century (~1779), Prussia instituted primary school reforms expressly to teach a unified version of the national language, \"Hochdeutsch\".", "title": "History" }, { "paragraph_id": 18, "text": "One significant reform was kindergarten whose purpose was to have the children participate in supervised activities taught by instructors who spoke the national language. The concept embraced the idea that children absorb new language skills more easily and quickly when they are young", "title": "History" }, { "paragraph_id": 19, "text": "The current model of kindergarten is reflective of the Prussian model.", "title": "History" }, { "paragraph_id": 20, "text": "In other countries, such as the Soviet Union, France, Spain, and Germany, the Prussian model has dramatically improved reading and math test scores for linguistic minorities.", "title": "History" }, { "paragraph_id": 21, "text": "In the 19th century, before the advent of government-funded public schools, Protestant organizations established Charity Schools to educate the lower social classes. The Roman Catholic Church and governments later adopted the model.", "title": "History" }, { "paragraph_id": 22, "text": "Designed to be inexpensive, Charity schools operated on minimal budgets and strived to serve as many needy children as possible. This led to the development of grammar schools, which primarily focused on teaching literacy, grammar, and bookkeeping skills so that the students could use books as an inexpensive resource to continue their education. Grammar was the first third of the then-prevalent system of classical education..", "title": "History" }, { "paragraph_id": 23, "text": "Educators Joseph Lancaster and Andrew Bell developed the monitorial system, also known as \"mutual instruction\" or the \"Bell–Lancaster method\". Their contemporary, educationalist and writer Elizabeth Hamilton, suggested that in some important aspects the method had been \"anticipated\" by the Belfast schoolmaster David Manson. In the 1760s Manson had developed a peer-teaching and monitoring system within the context of what he called a \"play school\" that dispensed with \"the discipline of the rod\". (More radically, Manson proposed the \"liberty of each [child] to take the quantity [of lessons] agreeable to his inclination\").", "title": "History" }, { "paragraph_id": 24, "text": "Lancaster, an impoverished Quaker during the early 19th century in London and Bell at the Madras School of India developed this model independent of one another. 
However, by design, their model utilizes more advanced students as a resource to teach the less advanced students; achieving student-teacher ratios as small as 1:2 and educating more than 1000 students per adult. The lack of adult supervision at the Lancaster school resulted in the older children acting as disciplinary monitors and taskmasters.", "title": "History" }, { "paragraph_id": 25, "text": "To provide order and promote discipline the school implemented a unique internal economic system, inventing a currency called a Scrip. Although the currency was worthless in the outside world, it was created at a fixed exchange rate from a student's tuition and student's could use scrip to buy food, school supplies, books, and other items from the school store. Students could earn scrip through tutoring. To promote discipline, the school adopted a work-study model. Every job of the school was bid-for by students, with the largest bid winning. However, any student tutor could auction positions in his or her classes to earn scrip. The bids for student jobs paid for the adult supervision.", "title": "History" }, { "paragraph_id": 26, "text": "Lancaster promoted his system in a piece called Improvements in Education that spread widely throughout the English-speaking world. Lancaster schools provided a grammar-school education with fully developed internal economies for a cost per student near $40 per year in 1999 U.S. dollars. To reduce cost and motivated to save up scrip, Lancaster students rented individual pages of textbooks from the school library instead of purchasing the textbook. Student's would read aloud their pages to groups. Students commonly exchanged tutoring and paid for items and services with receipts from down tutoring.", "title": "History" }, { "paragraph_id": 27, "text": "The schools did not teach submission to orthodox Christian beliefs or government authorities. As a result, most English-speaking countries developed mandatory publicly paid education explicitly to keep public education in \"responsible\" hands. These elites said that Lancaster schools might become dishonest, provide poor education, and were not accountable to established authorities. Lancaster's supporters responded that any child could cheat given the opportunity, and that the government was not paying for the education and thus deserved no say in their composition.", "title": "History" }, { "paragraph_id": 28, "text": "Though motivated by charity, Lancaster claimed in his pamphlets to be surprised to find that he lived well on the income of his school, even while the low costs made it available to the most impoverished street children. Ironically, Lancaster lived on the charity of friends in his later life.", "title": "History" }, { "paragraph_id": 29, "text": "Although educational reform occurred on a local level at various points throughout history, the modern notion of education reform is tied with the spread of compulsory education. Economic growth and the spread of democracy raised the value of education and increased the importance of ensuring that all children and adults have access to free, high-quality, effective education. Modern education reforms are increasingly driven by a growing understanding of what works in education and how to go about successfully improving teaching and learning in schools. 
However, in some cases, the reformers' goals of \"high-quality education\" has meant \"high-intensity education\", with a narrow emphasis on teaching individual, test-friendly subskills quickly, regardless of long-term outcomes, developmental appropriateness, or broader educational goals.", "title": "History" }, { "paragraph_id": 30, "text": "In the United States, Horace Mann (1796 – 1859) of Massachusetts used his political base and role as Secretary of the Massachusetts State Board of Education to promote public education in his home state and nationwide. Advocating a substantial public investment be made in education, Mann and his proponents developed a strong system of state supported common schools..", "title": "History" }, { "paragraph_id": 31, "text": "His crusading style attracted wide middle class support. Historian Ellwood P. Cubberley asserts:", "title": "History" }, { "paragraph_id": 32, "text": "In 1852, Massachusetts passed a law making education mandatory. This model of free, accessible education spread throughout the country and in 1917 Mississippi was the final state to adopt the law.", "title": "History" }, { "paragraph_id": 33, "text": "John Dewey, a philosopher and educator based in Chicago and New York, helped conceptualize the role of American and international education during the first four decades of the 20th century. An important member of the American Pragmatist movement, he carried the subordination of knowledge to action into the educational world by arguing for experiential education that would enable children to learn theory and practice simultaneously; a well-known example is the practice of teaching elementary physics and biology to students while preparing a meal. He was a harsh critic of \"dead\" knowledge disconnected from practical human life.", "title": "History" }, { "paragraph_id": 34, "text": "Dewey criticized the rigidity and volume of humanistic education, and the emotional idealizations of education based on the child-study movement that had been inspired by Rousseau and those who followed him. Dewey understood that children are naturally active and curious and learn by doing. Dewey's understanding of logic is presented in his work \"Logic, the Theory of Inquiry\" (1938). His educational philosophies were presented in \"My Pedagogic Creed\", The School and Society, The Child and Curriculum, and Democracy and Education (1916). Bertrand Russell criticized Dewey's conception of logic, saying \"What he calls \"logic\" does not seem to me to be part of logic at all; I should call it part of psychology.\"", "title": "History" }, { "paragraph_id": 35, "text": "Dewey left the University of Chicago in 1904 over issues relating to the Dewey School.", "title": "History" }, { "paragraph_id": 36, "text": "Dewey's influence began to decline in the time after the Second World War and particularly in the Cold War era, as more conservative educational policies came to the fore.", "title": "History" }, { "paragraph_id": 37, "text": "The form of educational progressivism which was most successful in having its policies implemented has been dubbed \"administrative progressivism\" by historians. This began to be implemented in the early 20th century. 
While influenced particularly in its rhetoric by Dewey and even more by his popularizers, administrative progressivism was in its practice much more influenced by the Industrial Revolution and the concept economies of scale.", "title": "History" }, { "paragraph_id": 38, "text": "The administrative progressives are responsible for many features of modern American education, especially American high schools: counseling programs, the move from many small local high schools to large centralized high schools, curricular differentiation in the form of electives and tracking, curricular, professional, and other forms of standardization, and an increase in state and federal regulation and bureaucracy, with a corresponding reduction of local control at the school board level. (Cf. \"State, federal, and local control of education in the United States\", below) (Tyack and Cuban, pp. 17–26)", "title": "History" }, { "paragraph_id": 39, "text": "These reforms have since become heavily entrenched, and many today who identify themselves as progressives are opposed to many of them, while conservative education reform during the Cold War embraced them as a framework for strengthening traditional curriculum and standards.", "title": "History" }, { "paragraph_id": 40, "text": "More recent methods, instituted by groups such as the think tank Reform's education division, and S.E.R. have attempted to pressure the government of the U.K. into more modernist educational reform, though this has met with limited success.", "title": "History" }, { "paragraph_id": 41, "text": "In the United States, public education is characterized as \"any federally funded primary or secondary school, administered to some extent by the government, and charged with educating all citizens. Although there is typically a cost to attend some public higher education institutions, they are still considered part of public education.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 42, "text": "In what would become the United States, the first public school was established in Boston, Massachusetts, on April 23, 1635. Puritan schoolmaster Philemon Pormont led instruction at the Boston Latin School. During this time, post-secondary education was a commonly utilized tool to distinguish one's social class and social status. Access to education was the \"privilege of white, upper-class, Christian male children\" in preparation for university education in ministry.", "title": "Public school reform in the United States" }, { "paragraph_id": 43, "text": "In colonial America, to maintain Puritan religious traditions, formal and informal education instruction focused on teaching literacy. All colonists needed to understand the written language on some fundamental level in order to read the Bible and the colony's written secular laws. Religious leaders recognized that each person should be \"educated enough to meet the individual needs of their station in life and social harmony.\" The first compulsory education laws were passed in Massachusetts between 1642 and 1648 when religious leaders noticed not all parents were providing their children with proper education. 
These laws stated that all towns with 50 or more families were obligated to hire a schoolmaster to teach children reading, writing, and basic arithmetic.", "title": "Public school reform in the United States" }, { "paragraph_id": 44, "text": "\"In 1642 the General Court passed a law that required heads of households to teach all their dependents — apprentices and servants as well as their own children — to read English or face a fine. Parents could provide the instruction themselves or hire someone else to do it. Selectmen were to keep 'a vigilant eye over their brethren and neighbors,' young people whose education was neglected could be removed from their parents or masters.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 45, "text": "The 1647 law eventually led to establishing publicly funded district schools in all Massachusetts towns, although, despite the threat of fines, compliance and quality of public schools were less than satisfactory.", "title": "Public school reform in the United States" }, { "paragraph_id": 46, "text": "\"Many towns were 'shamefully neglectful' of children's education. In 1718 '...by sad experience, it is found that many towns that not only are obliged by law, but are very able to support a grammar school, yet choose rather to incur and pay the fine or penalty than maintain a grammar school.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 47, "text": "When John Adams drafted the Massachusetts Constitution in 1780, he included provisions for a comprehensive education law that guaranteed public education to \"all\" citizens. However, access to formal education in secondary schools and colleges was reserved for free, white males. During the 17th and 18th centuries, females received little or no formal education except for home learning or attending Dame Schools. Likewise, many educational institutions maintained a policy of refusing to admit Black applicants. The Virginia Code of 1819 outlawed teaching enslaved people to read or write.", "title": "Public school reform in the United States" }, { "paragraph_id": 48, "text": "Soon after the American Revolution, early leaders, like Thomas Jefferson and John Adams, proposed the creation of a more \"formal and unified system of publicly funded schools\" to satiate the need to \"build and maintain commerce, agriculture and shipping interests\". Their concept of free public education was not well received and did not begin to take hold on until the 1830s. However, in 1790, evolving socio-cultural ideals in the Commonwealth of Pennsylvania led to the first significant and systematic reform in education legislation that mandated economic conditions would not inhibit a child's access to education:", "title": "Public school reform in the United States" }, { "paragraph_id": 49, "text": "\"Constitution of the Commonwealth of Pennsylvania – 1790 ARTICLE VII Section I. The legislature shall, as soon as conveniently may be, provide, by law, for the establishment of schools throughout the state, in such manner that the poor may be taught gratis.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 50, "text": "During Reconstruction, from 1865 to 1877, African Americans worked to encourage public education in the South. With the U.S. Supreme Court decision in Plessy v. 
Ferguson, which held that \"segregated public facilities were constitutional so long as the black and white facilities were equal to each other\", this meant that African American children were legally allowed to attend public schools, although these schools were still segregated based on race. However, by the mid-twentieth century, civil rights groups would challenge racial segregation.", "title": "Public school reform in the United States" }, { "paragraph_id": 51, "text": "During the second half of the nineteenth century (1870 and 1914), America's Industrial Revolution refocused the nation's attention on the need for a universally accessible public school system. Inventions, innovations, and improved production methods were critical to the continued growth of American manufacturing. To compete in the global economy, an overwhelming demand for literate workers that possessed practical training emerged. Citizens argued, \"educating children of the poor and middle classes would prepare them to obtain good jobs, thereby strengthen the nation's economic position.\" Institutions became an essential tool in yielding ideal factory workers with sought-after attitudes and desired traits such as dependability, obedience, and punctuality. Vocationally oriented schools offered practical subjects like shop classes for students who were not planning to attend college for financial or other reasons. Not until the latter part of the 19th century did public elementary schools become available throughout the country. Although, it would be longer for children of color, girls, and children with special needs to attain access free public education.", "title": "Public school reform in the United States" }, { "paragraph_id": 52, "text": "Systemic bias remained a formidable barrier. From the 1950s to the 1970s, many of the proposed and implemented reforms in U.S. education stemmed from the civil rights movement and related trends; examples include ending racial segregation, and busing for the purpose of desegregation, affirmative action, and banning of school prayer.", "title": "Public school reform in the United States" }, { "paragraph_id": 53, "text": "In the early 1950s, most U.S. public schools operated under a legally sanctioned racial segregation system. Civil Rights reform movements sought to address the biases that ensure unequal distribution of academic resources such as school funding, qualified and experienced teachers, and learning materials to those socially excluded communities. In the early 1950s, the NAACP lawyers brought class-action lawsuits on behalf of black schoolchildren and their families in Kansas, South Carolina, Virginia, and Delaware, petitioning court orders to compel school districts to let black students attend white public schools. Finally, in 1954, the U.S. Supreme Court rejected that framework with Brown v. 
Board of Education and declared state-sponsored segregation of public schools unconstitutional.", "title": "Public school reform in the United States" }, { "paragraph_id": 54, "text": "In 1964, Title VI of the Civil Rights Act \"prohibited discrimination on the basis of race, color, and national origin in programs and activities receiving federal financial assistance.\" Educational institutions could now utilize public funds to implement in-service training programs to assist teachers and administrators in establishing desegregation plans.", "title": "Public school reform in the United States" }, { "paragraph_id": 55, "text": "In 1965, the Higher Education Act (HEA) authorizes federal aid for postsecondary students.", "title": "Public school reform in the United States" }, { "paragraph_id": 56, "text": "The Elementary and Secondary Education Act of 1965 (ESEA) represents the federal government's commitment to providing equal access to quality education; including those children from low-income families, limited English proficiency, and other minority groups. This legislation had positive retroactive implications for Historically Black Colleges and Universities, more commonly known as HBCUs.", "title": "Public school reform in the United States" }, { "paragraph_id": 57, "text": "\"The Higher Education Act of 1965, as amended, defines an HBCU as: \"…any historically black college or university that was established prior to 1964, whose principal mission was, and is, the education of black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary [of Education] to be a reliable authority as to the quality of training offered or is, according to such an agency or association, making reasonable progress toward accreditation.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 58, "text": "Known as the Bilingual Education Act, Title VII of ESEA, offered federal aid to school districts to provide bilingual instruction for students with limited English speaking ability.", "title": "Public school reform in the United States" }, { "paragraph_id": 59, "text": "The Education Amendments of 1972 (Public Law 92-318, 86 Stat. 327) establishes the Education Division in the U.S. Department of Health, Education, and Welfare and the National Institute of Education. Title IX of the Education Amendments of 1972 states, \"No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 60, "text": "Equal Educational Opportunities Act of 1974 - Civil Rights Amendments to the Elementary and Secondary Education Act of 1965:", "title": "Public school reform in the United States" }, { "paragraph_id": 61, "text": "\"Title I: Bilingual Education Act - Authorizes appropriations for carrying out the provisions of this Act. Establishes, in the Office of Education, an Office of Bilingual Education through which the Commissioner of Education shall carry out his functions relating to bilingual education. 
Authorizes appropriations for school nutrition and health services, correction education services, and ethnic heritage studies centers.", "title": "Public school reform in the United States" }, { "paragraph_id": 62, "text": "Title II: Equal Educational Opportunities and the Transportation of Students: Equal Educational Opportunities Act - Provides that no state shall deny equal educational opportunity to an individual on account of his or her race, color, sex, or national origin by means of specified practices...", "title": "Public school reform in the United States" }, { "paragraph_id": 63, "text": "Title IV: Consolidation of Certain Education Programs: Authorizes appropriations for use in various education programs including libraries and learning resources, education for use of the metric system of measurement, gifted and talented children programs, community schools, career education, consumers' education, women's equity in education programs, and arts in education programs.", "title": "Public school reform in the United States" }, { "paragraph_id": 64, "text": "Community Schools Act - Authorizes the Commissioner to make grants to local educational agencies to assist in planning, establishing, expanding, and operating community education programs", "title": "Public school reform in the United States" }, { "paragraph_id": 65, "text": "Women's Educational Equity Act - Establishes the Advisory Council on Women's Educational Programs and sets forth the composition of such Council. Authorizes the Commissioner of Education to make grants to, and enter into contracts with, public agencies, private nonprofit organizations, and individuals for activities designed to provide educational equity for women in the United States.", "title": "Public school reform in the United States" }, { "paragraph_id": 66, "text": "Title V: Education Administration: Family Educational Rights and Privacy Act (FERPA)- Provides that no funds shall be made available under the General Education Provisions Act to any State or local educational agency or educational institution which denies or prevents the parents of students to inspect and review all records and files regarding their children.", "title": "Public school reform in the United States" }, { "paragraph_id": 67, "text": "Title VII: National Reading Improvement Program: Authorizes the Commissioner to contract with State or local educational agencies for the carrying out by such agencies, in schools having large numbers of children with reading deficiencies, of demonstration projects involving the use of innovative methods, systems, materials, or programs which show promise of overcoming such reading deficiencies.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 68, "text": "In 1975, The Education for All Handicapped Children Act (Public Law 94-142) ensured that all handicapped children (age 3-21) receive a \"free, appropriate public education\" designed to meet their special needs.", "title": "Public school reform in the United States" }, { "paragraph_id": 69, "text": "During the 1980s, some of the momentum of education reform moved from the left to the right, with the release of A Nation at Risk, Ronald Reagan's efforts to reduce or eliminate the United States Department of Education.", "title": "Public school reform in the United States" }, { "paragraph_id": 70, "text": "\"[T]he federal government and virtually all state governments, teacher training institutions, teachers' unions, major foundations, and the mass media have all 
pushed strenuously for higher standards, greater accountability, more \"time on task,\" and more impressive academic results\".", "title": "Public school reform in the United States" }, { "paragraph_id": 71, "text": "Per the shift in educational motivation, families sought institutional alternatives, including \"charter schools, progressive schools, Montessori schools, Waldorf schools, Afrocentric schools, religious schools - or home school instruction in their communities.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 72, "text": "In 1984 President Reagan enacted the Education for Economic Security Act", "title": "Public school reform in the United States" }, { "paragraph_id": 73, "text": "In 1989, the Child Development and Education Act of 1989 authorized funds for Head Start Programs to include child care services.", "title": "Public school reform in the United States" }, { "paragraph_id": 74, "text": "In the latter half of the decade, E. D. Hirsch put forth an influential attack on one or more versions of progressive education. Advocating an emphasis on \"cultural literacy\"—the facts, phrases, and texts.", "title": "Public school reform in the United States" }, { "paragraph_id": 75, "text": "See also Uncommon Schools.", "title": "Public school reform in the United States" }, { "paragraph_id": 76, "text": "In 1994, the land grant system was expanded via the Elementary and Secondary Education Act to include tribal colleges.", "title": "Public school reform in the United States" }, { "paragraph_id": 77, "text": "Most states and districts in the 1990s adopted outcome-based education (OBE) in some form or another. A state would create a committee to adopt standards, and choose a quantitative instrument to assess whether the students knew the required content or could perform the required tasks.", "title": "Public school reform in the United States" }, { "paragraph_id": 78, "text": "In 1992 The National Commission on Time and Learning, Extension revise funding for civic education programs and those educationally disadvantaged children.\"", "title": "Public school reform in the United States" }, { "paragraph_id": 79, "text": "In 1994 the Improving America's Schools Act (IASA) reauthorized the Elementary and Secondary Education Act of 1965; amended as The Eisenhower Professional Development Program; IASA designated Title I funds for low income and otherwise marginalized groups; i.e., females, minorities, individuals with disabilities, individuals with limited English proficiency (LEP). By tethering federal funding distributions to student achievement, IASA meant use high stakes testing and curriculum standards to hold schools accountable for their results at the same level as other students. 
The Act significantly increased impact aid for the establishment of the Charter School Program, drug awareness campaigns, bilingual education, and technology.", "title": "Public school reform in the United States" }, { "paragraph_id": 80, "text": "In 1998, the Charter School Expansion Act amended the Charter School Program, enacted in 1994.", "title": "Public school reform in the United States" }, { "paragraph_id": 81, "text": "The Consolidated Appropriations Act of 2001 appropriated funding to repair educational institutions' buildings as well as repair and renovate charter school facilities, reauthorized the Even Start program, and enacted the Children's Internet Protection Act.", "title": "Public school reform in the United States" }, { "paragraph_id": 82, "text": "The standards-based National Education Goals 2000, set by the U.S. Congress in the 1990s, were based on the principles of outcomes-based education. In 2002, the standards-based reform movement culminated in the No Child Left Behind Act of 2001, under which achievement standards were set by each individual state. This federal policy was active in the United States until 2015.", "title": "Public school reform in the United States" }, { "paragraph_id": 83, "text": "An article released by CBNC.com said a principal Senate committee would consider legislation that reauthorizes and modernizes the Carl D. Perkins Act. President George Bush approved this statute on August 12, 2006. The new bill emphasized the importance of federal funding for various Career and Technical Education (CTE) programs that will better provide learners with in-demand skills. Pell Grants are a specific amount of money given by the government every school year to disadvantaged students who need to pay tuition fees in college.", "title": "Public school reform in the United States" }, { "paragraph_id": 84, "text": "At present, there are many initiatives aimed at dealing with these concerns, such as innovative cooperation between federal and state governments, educators, and the business sector. One of these efforts is the Pathways to Technology Early College High School (P-TECH). This six-year program was launched in cooperation with IBM, educators from three cities in New York, Chicago, and Connecticut, and over 400 businesses. The program offers students a combined high school and associate degree program focusing on the STEM curriculum. The High School Involvement Partnership, a private and public venture, was established through the help of Northrop Grumman, a global security firm.
It has given assistance to some 7,000 high school students (juniors and seniors) since 1971 by means of one-on-one coaching as well as exposure to STEM areas and careers.", "title": "Public school reform in the United States" }, { "paragraph_id": 85, "text": "The American Reinvestment and Recovery Act, enacted in 2009, reserved more than $85 billion in public funds to be used for education.", "title": "Public school reform in the United States" }, { "paragraph_id": 86, "text": "The 2009 Council of Chief State School Officers and the National Governors Association launch the Common Core State Standards Initiative.", "title": "Public school reform in the United States" }, { "paragraph_id": 87, "text": "In 2012 the Obama administration launched the Race to the Top competition aimed at spurring K–12 education reform through higher standards.", "title": "Public school reform in the United States" }, { "paragraph_id": 88, "text": "\"The Race to the Top – District competition will encourage transformative change within schools, targeted toward leveraging, enhancing, and improving classroom practices and resources.", "title": "Public school reform in the United States" }, { "paragraph_id": 89, "text": "The four key areas of reform include:", "title": "Public school reform in the United States" }, { "paragraph_id": 90, "text": "In 2015, under the Obama administration, many of the more restrictive elements that were enacted under No Child Left Behind (NCLB, 2001), were removed in the Every Student Succeeds Act (ESSA, 2015) which limits the role of the federal government in school liability. Every Student Succeeds Act reformed educational standards by \"moving away from such high stakes and assessment based accountability models\" and focused on assessing student achievement from a holistic approach by utilizing qualitative measures. Some argue that giving states more authority can help prevent considerable discrepancies in educational performance across different states. ESSA was approved by former President Obama in 2015 which amended and empowered the Elementary and Secondary Education Act of 1965. The Department of Education has the choice to carry out measures in drawing attention to said differences by pinpointing lowest-performing state governments and supplying information on the condition and progress of each state on different educational parameters. It can also provide reasonable funding along with technical aid to help states with similar demographics collaborate in improving their public education programs.", "title": "Public school reform in the United States" }, { "paragraph_id": 91, "text": "This uses a methodology that values purposeful engagement in activities that turn students into self-reliant and efficient learners. Holding on to the view that everyone possesses natural gifts that are unique to one's personality (e.g. computational aptitude, musical talent, visual arts abilities), it likewise upholds the idea that children, despite their inexperience and tender age, are capable of coping with anguish, able to survive hardships, and can rise above difficult times.", "title": "Public school reform in the United States" }, { "paragraph_id": 92, "text": "In 2017, Betsy DeVos was instated as the 11th Secretary of Education. A strong proponent of school choice, school voucher programs, and charter schools, DeVos was a much-contested choice as her own education and career had little to do with formal experience in the US education system. 
In a Republican-dominated senate, she received a 50–50 vote - a tie that was broken by Vice President Mike Pence. Prior to her appointment, DeVos received a BA degree in business economics from Calvin College in Grand Rapids, Michigan and she served as chairman of an investment management firm, The Windquest Group. She supported the idea of leaving education to state governments under the new K-12 legislation. DeVos cited the interventionist approach of the federal government to education policy following the signing of the ESSA. The primary approach to that rule has not changed significantly. Her opinion was that the education movement populist politics or populism encouraged reformers to commit promises which were not very realistic and therefore difficult to deliver.", "title": "Public school reform in the United States" }, { "paragraph_id": 93, "text": "On July 31, 2018, President Donald Trump signed the Strengthening Career and Technical Education for the 21st Century Act (HR 2353) The Act reauthorized the Carl D. Perkins Career and Technical Education Act, a $1.2 billion program modified by the United States Congress in 2006. A move to change the Higher Education Act was also deferred.", "title": "Public school reform in the United States" }, { "paragraph_id": 94, "text": "The legislation enacted on July 1, 2019, replaced the Carl D. Perkins Career and Technical Education (Perkins IV) Act of 2006. Stipulations in Perkins V enables school districts to make use of federal subsidies for all students' career search and development activities in the middle grades as well as comprehensive guidance and academic mentoring in the upper grades. At the same time, this law revised the meaning of \"special populations\" to include homeless persons, foster youth, those who left the foster care system, and children with parents on active duty in the United States armed forces.", "title": "Public school reform in the United States" }, { "paragraph_id": 95, "text": "Another factor to consider in education reform is that of equity and access. Contemporary issues in the United States regarding education faces a history of inequalities that come with consequences for education attainment across different social groups.", "title": "Barriers to reform" }, { "paragraph_id": 96, "text": "A history of racial, and subsequently class, segregation in the U.S. resulted from practices of law. Residential segregation is a direct result of twentieth century policies that separated by race using zoning and redlining practices, in addition to other housing policies, whose effects continue to endure in the United States. These neighborhoods that have been segregated de jure—by force of purposeful public policy at the federal, state, and local levels—disadvantage people of color as students must attend school near their homes.", "title": "Barriers to reform" }, { "paragraph_id": 97, "text": "With the inception of the New Deal between 1933 and 1939, and during and following World War II, federally funded public housing was explicitly racially segregated by the local government in conjunction with federal policies through projects that were designated for Whites or Black Americans in the South, Northeast, Midwest, and West. Following an ease on the housing shortage post-World War II, the federal government subsidized the relocation of Whites to suburbs. 
The Federal Housing and Veterans Administration constructed such developments on the East Coast in towns like Levittown on Long Island, New Jersey, Pennsylvania, and Delaware. On the West Coast, there was Panorama City, Lakewood, Westlake, and Seattle suburbs developed by Bertha and William Boeing. As White families left for the suburbs, Black families remained in public housing and were explicitly placed in Black neighborhoods. Policies such as public housing director, Harold Ickes', \"neighborhood composition rule\" maintained this segregation by establishing that public housing must not interfere with pre-existing racial compositions of neighborhoods. Federal loan guarantees were given to builders who adhered to the condition that no sales were made to Black families and each deed prohibited re-sales to Black families, what the Federal Housing Administration (FHA) described as an \"incompatible racial element\". In addition, banks and savings intuitions refused loans to Black families in White suburbs and Black families in Black neighborhoods. In the mid-twentieth century, urban renewal programs forced low-income black residents to reside in places farther from universities, hospitals, or business districts and relocation options consisted of public housing high-rises and ghettos.", "title": "Barriers to reform" }, { "paragraph_id": 98, "text": "This history of de jure segregation has impacted resource allocation for public education in the United States, with schools continuing to be segregated by race and class. Low-income White students are more likely than Black students to be integrated into middle-class neighborhoods and less likely to attend schools with other predominantly disadvantaged students. Students of color disproportionately attend underfunded schools and Title I schools in environments entrenched in environmental pollution and stagnant economic mobility with limited access to college readiness resources. According to research, schools attended by primarily Hispanic or African American students often have high turnover of teaching staff and are labeled high-poverty schools, in addition to having limited educational specialists, less available extracurricular opportunities, greater numbers of provisionally licensed teachers, little access to technology, and buildings that are not well maintained. With this segregation, more local property tax is allocated to wealthier communities and public schools' dependence on local property taxes has led to large disparities in funding between neighboring districts. The top 10% of wealthiest school districts spend approximately ten times more per student than the poorest 10% of school districts.", "title": "Barriers to reform" }, { "paragraph_id": 99, "text": "This history of racial and socioeconomic class segregation in the U.S. has manifested into a racial wealth divide. With this history of geographic and economic segregation, trends illustrate a racial wealth gap that has impacted educational outcomes and its concomitant economic gains for minorities. Wealth or net worth—the difference between gross assets and debt—is a stock of financial resources and a significant indicator of financial security that offers a more complete measure of household capability and functioning than income. Within the same income bracket, the chance of completing college differs for White and Black students. Nationally, White students are at least 11% more likely to complete college across all four income groups. 
Intergenerational wealth is another result of this history, with White college-educated families three times as likely as Black families to get an inheritance of $10,000 or more. 10.6% of White children from low-income backgrounds and 2.5% of Black children from low-income backgrounds reach the top 20% of income distribution as adults. Less than 10% of Black children from low-income backgrounds reach the top 40%.", "title": "Barriers to reform" }, { "paragraph_id": 100, "text": "These disadvantages facing students of color are apparent early on in early childhood education. By the age of five, children of color are impacted by opportunity gaps indicated by poverty, school readiness gap, segregated low-income neighborhoods, implicit bias, and inequalities within the justice system as Hispanic and African American boys account for as much as 60% of total prisoners within the incarceration population. These populations are also more likely to experience adverse childhood experiences (ACEs).", "title": "Barriers to reform" }, { "paragraph_id": 101, "text": "High-quality early care and education are less accessible to children of color, particularly African American preschoolers as findings from the National Center for Education Statistics show that in 2013, 40% of Hispanic and 36% White children were enrolled in learning center-based classrooms rated as high, while 25% of African American children were enrolled in these programs. 15% of African American children attended low ranking center-based classrooms. In home-based settings, 30% of White children and over 50% of Hispanic and African American children attended low rated programs.", "title": "Barriers to reform" }, { "paragraph_id": 102, "text": "In the first decade of the 21st century, several issues are salient in debates over further education reform:", "title": "Contemporary issues (United States)" }, { "paragraph_id": 103, "text": "Charter schools public independent institutions in which both the cost and risk are fully funded by the taxpayers. Some charter schools are nonprofit in name only and are structured in ways that individuals and private enterprises connected to them can make money. Other charter schools are for-profit. In many cases, the public is largely unaware of this rapidly changing educational landscape, the debate between public and private/market approaches, and the decisions that are being made that affect their children and communities. Critics have accused for-profit entities, (education management organizations, EMOs) and private foundations such as the Bill and Melinda Gates Foundation, the Eli and Edythe Broad Foundation, and the Walton Family Foundation of funding Charter school initiatives to undermine public education and turn education into a \"Business Model\" which can make a profit. In some cases a school's charter is held by a non-profit that chooses to contract all of the school's operations to a third party, often a for-profit, CMO. This arrangement is defined as a vendor-operated school, (VOS).", "title": "Contemporary issues (United States)" }, { "paragraph_id": 104, "text": "Economists, such as the late Nobel laureate Milton Friedman, advocate for school choice to promote excellence in education through competition and choice. A competitive market for schooling provides a workable method of accountability for results. Public education vouchers permit guardians to select and pay any school, public or private, with public funds that were formerly allocated directly to local public schools. 
The theory is that children's guardians will naturally shop for the best schools for their children, much as is already done at college level.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 105, "text": "Many reforms based on school choice have led to slight to moderate improvements. Some teachers' union members see those improvements as insufficient to offset the decreased teacher pay and job security. For instance, New Zealand's landmark reform in 1989, during which schools were granted substantial autonomy, funding was devolved to schools, and parents were given a free choice of which school their children would attend, led to moderate improvements in most schools. It was argued that the associated increases in inequity and greater racial stratification in schools nullified the educational gains. Others, however, argued that the original system created more inequity, due to lower income students being required to attend poorer performing inner city schools and not being allowed school choice or better educations that are available to higher income inhabitants of suburbs. Thus, it was argued that school choice promoted social mobility and increased test scores, especially in the cases of low income students. Similar results have been found in other jurisdictions. The small improvements produced by some school choice policies seem to reflect weaknesses in the ways that choice is implemented, rather than a failure of the basic principle itself.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 106, "text": "Critics of teacher tenure claim that the laws protect ineffective teachers from being fired, which can be detrimental to student success. Tenure laws vary from state to state, but generally they set a probationary period during which the teacher proves themselves worthy of the lifelong position. Probationary periods range from one to three years. Advocates for tenure reform often consider these periods too short to make such an important decision; especially when that decision is exceptionally hard to revoke. Due process restriction protect tenured teachers from being wrongfully fired; however these restrictions can also prevent administrators from removing ineffective or inappropriate teachers. A 2008 survey conducted by the US Department of Education found that, on average, only 2.1% of teachers are dismissed each year for poor performance.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 107, "text": "In October 2010 Apple Inc. CEO Steve Jobs had a consequential meeting with U.S. President Barack Obama to discuss U.S. competitiveness and the nation's education system. During the meeting Jobs recommended pursuing policies that would make it easier for school principals to hire and fire teachers based on merit.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 108, "text": "In 2012 tenure for school teachers was challenged in a California lawsuit called Vergara v. California. The primary issue in the case was the impact of tenure on student outcomes and on equity in education. On June 10, 2014, the trial judge ruled that California's teacher tenure statute produced disparities that \" shock the conscience\" and violate the equal protection clause of the California Constitution. On July 7, 2014, U.S. Secretary of Education Arne Duncan commented on the Vergara decision during a meeting with President Barack Obama and representatives of teacher's unions. 
Duncan said that tenure for school teachers \"should be earned through demonstrated effectiveness\" and should not be granted too quickly. Specifically, he criticized the 18-month tenure period at the heart of the Vergara case as being too short to be a \"meaningful bar.\"", "title": "Contemporary issues (United States)" }, { "paragraph_id": 109, "text": "According to a 2005 report from the OECD, the United States is tied for first place with Switzerland when it comes to annual spending per student on its public schools, with each of those two countries spending more than $11,000 (in U.S. currency). Despite this high level of funding, U.S. public schools lag behind the schools of other rich countries in the areas of reading, math, and science. A further analysis of developed countries shows no correlation between per student spending and student performance, suggesting that there are other factors influencing education. Top performers include Singapore, Finland and Korea, all with relatively low spending on education, while high spenders including Norway and Luxembourg have relatively low performance. One possible factor is the distribution of the funding.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 110, "text": "In the US, schools in wealthy areas tend to be over-funded while schools in poorer areas tend to be underfunded. These differences in spending between schools or districts may accentuate inequalities, if they result in the best teachers moving to teach in the most wealthy areas. The inequality between districts and schools led to 23 states instituting school finance reform based on adequacy standards that aim to increase funding to low-income districts. A 2018 study found that between 1990 and 2012, these finance reforms led to an increase in funding and test scores in the low income districts; which suggests finance reform is effective at bridging inter-district performance inequalities. It has also been shown that the socioeconomic situation of the students family has the most influence in determining success; suggesting that even if increased funds in a low income area increase performance, they may still perform worse than their peers from wealthier districts.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 111, "text": "Starting in the early 1980s, a series of analyses by Eric Hanushek indicated that the amount spent on schools bore little relationship to student learning. This controversial argument, which focused attention on how money was spent instead of how much was spent, led to lengthy scholarly exchanges. In part the arguments fed into the class size debates and other discussions of \"input policies.\" It also moved reform efforts towards issues of school accountability (including No Child Left Behind) and the use of merit pay and other incentives.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 112, "text": "There have been studies that show smaller class sizes and newer buildings (both of which require higher funding to implement) lead to academic improvements. It should also be noted that many of the reform ideas that stray from the traditional format require greater funding.", "title": "Contemporary issues (United States)" }, { "paragraph_id": 113, "text": "According to a 1999 article, William J. Bennett, former U.S. 
Secretary of Education, argued that increased levels of spending on public education have not made the schools better, citing the following statistics:", "title": "Contemporary issues (United States)" }, { "paragraph_id": 114, "text": "EDUCATION FOR ALL THROUGHOUT LIFE", "title": "Internationally" }, { "paragraph_id": 115, "text": "The EFA Assessment 2000 was launched in July 1998 with an aim to help countries to identify both problems and prospects for further progress of EFA, and to strengthen their capacity to improve and monitor the provision and outcomes of basic education. Some 179 countries set up National Assessment Groups which collected quantitative data focusing on eighteen core indicators and carried out case-studies to collect qualitative information.", "title": "Internationally" }, { "paragraph_id": 116, "text": "", "title": "Internationally" }, { "paragraph_id": 117, "text": "Education 2030 Agenda refers to the global commitment of the Education for All movement to ensure access to basic education for all. It is an essential part of the 2030 Agenda for Sustainable Development. The roadmap to achieve the Agenda is the Education 2030 Incheon Declaration and Framework for Action, which outlines how countries, working with UNESCO and global partners, can translate commitments into action.", "title": "Internationally" }, { "paragraph_id": 118, "text": "The United Nations, over 70 ministers, representatives of member-countries, bilateral and multilateral agencies, regional organizations, academic institutions, teachers, civil society, and the youth supported the Framework for Action of the Education 2030 platform. The Framework was described as the outcome of continuing consultation to provide guidance for countries in implementing this Agenda. At the same time, it mobilizes various stakeholders in the new education objectives, coordination, implementation process, funding, and review of Education 2030.", "title": "Internationally" }, { "paragraph_id": 119, "text": "In 1995, the minister of education, Sukavich Rangsitpol, launched a series of education reforms in 1995 with the intention of the education reform is to realize the potential of Thai people to develop themselves for a better quality of life and to develop the nation for a peaceful co-existence in the global community.", "title": "Internationally" }, { "paragraph_id": 120, "text": "Since December 1995, activities have been conducted in four main areas:", "title": "Internationally" }, { "paragraph_id": 121, "text": "· School reform. Efforts have been stepped up to standardize the quality of education in all levels and types of schools and educational institutions. Educational coverage has been expanded.", "title": "Internationally" }, { "paragraph_id": 122, "text": "· Teacher reform. Training and recruitment of teachers have been reformed urgently and comprehensively both in public and private schools. Educational administrators and personnel have been developed continuously.", "title": "Internationally" }, { "paragraph_id": 123, "text": "Curriculum reform. Curriculum and teaching-learning processes have been reformed on an urgent basis in order to raise educational quality of all types and levels.", "title": "Internationally" }, { "paragraph_id": 124, "text": "· Administrative reform. Through devolution, educational institutions have been empowered to make administrative decisions and to offer appropriate educational services which are as consistent as possible with the local lifestyle and conditions. 
Provincial organizations have been strengthened to facilitate devolution while private participation of the family and community has been promoted and supported.", "title": "Internationally" }, { "paragraph_id": 125, "text": "", "title": "Internationally" }, { "paragraph_id": 126, "text": "School-based management (SBM) was implemented in Thailand in 1997 in the course of a reform aimed at overcoming a profound crisis in the education system.", "title": "Internationally" }, { "paragraph_id": 127, "text": "Effective Provincial Education Councils with strong community membership were established. Because the purpose of decentralization is to ensure that local education needs are met, there should be a close relationship between community representatives and officials. Thus, decentralization requires a careful balance between the guidance of community-selected representatives and government officials in representing local needs and priorities.", "title": "Internationally" }, { "paragraph_id": 128, "text": "", "title": "Internationally" }, { "paragraph_id": 129, "text": "The 1995 Education Reform resulted in 40,000 schools coming under the Education Reform Project by 1997. These schools were required to improve their school environment and encourage the local community to be involved in school administration and management.", "title": "Internationally" }, { "paragraph_id": 130, "text": "Those schools later accepted 4.35 million students aged between 3 and 17 years old from poor families in remote areas. Thereafter, Thailand successfully established Education For All (EFA). Thus, Thailand received 1997 ACEID awards for excellence in education from UNESCO.", "title": "Internationally" }, { "paragraph_id": 131, "text": "According to UNESCO, Thailand's education reform has led to the following results:", "title": "Internationally" }, { "paragraph_id": 132, "text": "The World Bank reported that after the 1997 Asian financial crisis, income in the northeast, the poorest part of Thailand, rose by 46 percent from 1998 to 2006. Nationwide poverty fell from 21.3 to 11.3 percent.", "title": "Internationally" }, { "paragraph_id": 133, "text": "The learning crisis is the reality that while the majority of children around the world attend school, a large proportion of them are not learning. A World Bank study found that \"53 percent of children in low- and middle-income countries cannot read and understand a simple story by the end of primary school.\" While schooling has increased rapidly over the last few decades, learning has not followed suit. Many practitioners and academics call for education system reform in order to address the learning needs of all children.", "title": "Internationally" }, { "paragraph_id": 134, "text": "The movement to use computers more in education naturally includes many unrelated ideas, methods, and pedagogies since there are many uses for digital computers. For example, the fact that computers are naturally good at math leads to the question of the use of calculators in math education. The Internet's communication capabilities make it potentially useful for collaboration and foreign language learning. The computer's ability to simulate physical systems makes it potentially useful in teaching science. More often, however, debate of digital education reform centers around more general applications of computers to education, such as electronic test-taking and online classes.", "title": "Digital education" }, { "paragraph_id": 135, "text": "Another viable addition to digital education has been blended learning.
In 2009, over 3 million K-12 students took an online course, compared to 2000 when 45,000 took an online course. Blended learning examples include pure online, blended, and traditional education. Research results show that the most effective learning takes place in a blended format. This allows children to view the lecture ahead of time and then spend class time practicing, refining, and applying what they have previously learned.", "title": "Digital education" }, { "paragraph_id": 136, "text": "The idea of creating artificial intelligence led some computer scientists to believe that teachers could be replaced by computers, through something like an expert system; however, attempts to accomplish this have predictably proved inflexible. The computer is now more understood to be a tool or assistant for the teacher and students.", "title": "Digital education" }, { "paragraph_id": 137, "text": "Harnessing the richness of the Internet is another goal. In some cases classrooms have been moved entirely online, while in other instances the goal is more to learn how the Internet can be more than a classroom.", "title": "Digital education" }, { "paragraph_id": 138, "text": "Web-based international educational software is under development by students at New York University, based on the belief that current educational institutions are too rigid: effective teaching is not routine, students are not passive, and questions of practice are not predictable or standardized. The software allows for courses tailored to an individual's abilities through frequent and automatic multiple intelligences assessments. Ultimate goals include assisting students to be intrinsically motivated to educate themselves, and aiding the student in self-actualization. Courses typically taught only in college are being reformatted so that they can be taught to any level of student, whereby elementary school students may learn the foundations of any topic they desire. Such a program has the potential to remove the bureaucratic inefficiencies of education in modern countries, and with the decreasing digital divide, help developing nations rapidly achieve a similar quality of education. With an open format similar to Wikipedia, any teacher may upload their courses online and a feedback system will help students choose relevant courses of the highest quality. Teachers can provide links in their digital courses to webcast videos of their lectures. Students will have personal academic profiles and a forum will allow students to pose complex questions, while simpler questions will be automatically answered by the software, which will bring you to a solution by searching through the knowledge database, which includes all available courses and topics.", "title": "Digital education" }, { "paragraph_id": 139, "text": "The 21st century ushered in the acceptance and encouragement of internet research conducted on college and university campuses, in homes, and even in gathering areas of shopping centers. Addition of cyber cafes on campuses and coffee shops, loaning of communication devices from libraries, and availability of more portable technology devices, opened up a world of educational resources. Availability of knowledge to the elite had always been obvious, yet provision of networking devices, even wireless gadget sign-outs from libraries, made availability of information an expectation of most persons. Cassandra B. Whyte researched the future of computer use on higher education campuses focusing on student affairs. 
Though at first seen as a data collection and outcome reporting tool, the use of computer technology in the classrooms, meeting areas, and homes continued to unfold. The sole dependence on paper resources for subject information diminished, and e-books and articles, as well as online courses, were anticipated to become increasingly staple and affordable choices provided by higher education institutions, according to Whyte in a 2002 presentation.", "title": "Digital education" }, { "paragraph_id": 140, "text": "Digitally \"flipping\" classrooms is a trend in digital education that has gained significant momentum. Will Richardson, author and visionary for the digital education realm, points to the not-so-distant future and the seemingly infinite possibilities for digital communication linked to improved education. Education on the whole, as a stand-alone entity, has been slow to embrace these changes. The use of web tools such as wikis, blogs, and social networking sites is tied to increasing overall effectiveness of digital education in schools. Examples exist of teacher and student success stories where learning has transcended the classroom and has reached far out into society.", "title": "Digital education" }, { "paragraph_id": 141, "text": "The media has been instrumental in pushing formal educational institutions to become savvier in their methods. Additionally, advertising has been (and continues to be) a vital force in shaping students' and parents' thought patterns.", "title": "Digital education" }, { "paragraph_id": 142, "text": "Technology is a dynamic entity that is constantly in flux. As time presses on, new technologies will continue to break paradigms that will reshape human thinking regarding technological innovation. This concept stresses a certain disconnect between teachers and learners and the growing chasm that started some time ago. Richardson asserts that traditional classrooms will essentially enter entropy unless teachers increase their comfort and proficiency with technology.", "title": "Digital education" }, { "paragraph_id": 143, "text": "Administrators are not exempt from the technological disconnect. They must recognize the existence of a younger generation of teachers who were born during the Digital Age and are very comfortable with technology. However, when old meets new, especially in a mentoring situation, conflict seems inevitable. Ironically, the answer to the outdated mentor may be digital collaboration with worldwide mentor webs composed of individuals with creative ideas for the classroom.", "title": "Digital education" }, { "paragraph_id": 144, "text": "This article incorporates text from a free content work. Licensed under CC-BY-SA IGO 3.0 (license statement/permission). Text taken from Education Transforms Lives, 6, 8-9, UNESCO, UNESCO.", "title": "Sources" } ]
Education reform is the name given to the goal of changing public education. The meaning and education methods have changed through debates over what content or experiences result in an educated individual or an educated society. Historically, the motivations for reform have not reflected the current needs of society. A consistent theme of reform includes the idea that large systematic changes to educational standards will produce social returns in citizens' health, wealth, and well-being. As part of the broader social and political processes, the term education reform refers to the chronology of significant, systematic revisions made to amend the educational legislation, standards, methodology, and policy affecting a nation's public school system to reflect the needs and values of contemporary society. In the 18th century, classical education instruction from an in-home personal tutor, hired at the family's expense, was primarily a privilege for children from wealthy families. Innovations such as encyclopedias, public libraries, and grammar schools all aimed to relieve some of the financial burden associated with the expenses of the classical education model. Motivations during the Victorian era emphasized the importance of self-improvement. Victorian education focused on teaching commercially valuable topics, such as modern languages and mathematics, rather than classical liberal arts subjects, such as Latin, art, and history. Education reformists like Horace Mann and his proponents focused on making schooling more accessible and developing a robust state-supported common school system. John Dewey, an early 20th-century reformer, focused on improving society by advocating for a scientific, pragmatic, or democratic principle-based curriculum, whereas Maria Montessori incorporated humanistic motivations to "meet the needs of the child". In historic Prussia, a motivation to foster national unity led to formal education concentrated on teaching national language literacy to young children, resulting in Kindergarten. The history of educational pedagogy in the United States has ranged from teaching literacy and proficiency in religious doctrine to establishing cultural literacy, assimilating immigrants into a democratic society, producing a skilled labor force for the industrialized workplace, preparing students for careers, and competing in a global marketplace. Education inequality is also a motivation for education reform, seeking to address the problems of a community.
2001-10-06T01:13:29Z
2023-11-22T05:00:03Z
[ "Template:Div col", "Template:Cite news", "Template:Authority control", "Template:Main", "Template:Cite journal", "Template:Education in USA", "Template:ISBN", "Template:Citation", "Template:Citation needed", "Template:See also", "Template:Portal", "Template:Div col end", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:Short description", "Template:Cite court", "Template:Free-content attribution", "Template:'", "Template:Standards-based Education Reform", "Template:Webarchive" ]
https://en.wikipedia.org/wiki/Education_reform
9,621
Ellensburg, Washington
Ellensburg is a city in and the county seat of Kittitas County, Washington, United States. It is located just east of the Cascade Range near the junction of Interstate 90 and Interstate 82. The population was 18,666 at the 2020 census and was estimated to be 19,596 in 2021. The city is located along the Yakima River in the Kittitas Valley, an agricultural region that extends east towards the Columbia River. The valley is a major producer of timothy hay, which is processed and shipped internationally. Ellensburg is also the home of Central Washington University (CWU). Ellensburg, originally named Ellensburgh for the wife of town founder John Alden Shoudy, was founded in 1871 and grew rapidly in the 1880s following the arrival of the Northern Pacific Railway. The city was once a leading candidate to become the state capital of Washington, but its campaign was scuppered by a major fire in 1889. John Alden Shoudy arrived in the Kittitas Valley in 1871 and purchased a small trading post from Andrew Jackson "A.J." Splawn, called "Robber's Roost". Robber's Roost was the first business in the valley, aside from the early trading that occurred among Native Americans, cattle drivers, trappers, and miners. A small stone monument to Robber's Roost with a placard can be found at its original location, present-day 3rd Avenue, just west of Main Street near the alley. Shoudy named the new town after his wife, Mary Ellen, thus officially starting the city of Ellensburgh around 1872. Shoudy had not been the first settler nor the first business person in the Kittitas Valley, but he was responsible for platting the city of Ellensburgh in the 1870s and also named the streets in the downtown district. Ellensburgh was officially incorporated on November 26, 1883. In 1894, the final -h was dropped under standardization pressure from the United States Postal Service and the Board on Geographic Names. Ellensburg was an early center of commerce in Washington and was among the first cities in the state to have electrical service. The city launched a bid to become Washington state's capital in 1889, preparing a site in the Capital Hill neighborhood for government offices. On July 4 that year, however, a major fire destroyed much of the downtown area and stalled the campaign, which resumed with a series of referendums, in which Washington voters chose Olympia. The state legislature selected Ellensburg as the location for the State Normal School (now Central Washington University). There were several early newspapers in Ellensburg. The Daily Record, which started in 1909, is the publication which serves the city and county today. Concerns over the state of Ellensburg's historic downtown led to the formation of the Ellensburg Downtown Association to work on revitalizing the area. The City of Ellensburg has several local art museums and galleries. According to the United States Census Bureau, the city has a total area of 6.97 square miles (18.05 km), of which 6.92 square miles (17.92 km) is land and 0.05 square miles (0.13 km) is water. Owing to the strong Cascade rain shadow, Ellensburg experiences a typical Intermountain cool semi-arid climate (Köppen BSk). The hottest temperature recorded in Ellensburg was 110 °F (43.3 °C) on July 26, 1928, while the coldest temperature recorded was −31 °F (−35.0 °C) on December 12, 1919. As of the census of 2010, there were 18,174 people, 7,301 households, and 2,889 families living in the city. The population density was 2,626.3 inhabitants per square mile (1,014.0/km). 
There were 7,867 housing units at an average density of 1,136.8 per square mile (438.9/km). The racial makeup of the city was 85.7% White, 1.5% African American, 1.0% Native American, 3.2% Asian, 0.2% Pacific Islander, 4.6% from other races, and 3.7% from two or more races. Hispanics or Latinos of any race were 9.7% of the population. There were 7,301 households, of which 19.3% had children under the age of 18 living with them, 28.2% were married couples living together, 8.2% had a female householder with no husband present, 3.1% had a male householder with no wife present, and 60.4% were non-families. 35.1% of all households were made up of individuals, and 9.6% had someone living alone who was 65 years of age or older. The average household size was 2.16 and the average family size was 2.86. The median age in the city was 23.5 years. 14.2% of residents were under the age of 18; 41.2% were between the ages of 18 and 24; 21.8% were from 25 to 44; 13.9% were from 45 to 64; and 8.9% were 65 years of age or older. The gender makeup of the city was 50.1% male and 49.9% female. As of the census of 2000, there were 15,414 people, 6,249 households, and 2,649 families living in the city. The population density was 2,338.9 people per square mile (903.1 people/km). There were 6,732 housing units at an average density of 1,021.5 per square mile (394.4/km). The racial makeup of the city was 88.07% White, 1.17% Black or African American, 0.95% Native American, 4.09% Asian, 0.16% Pacific Islander, 2.86% from other races, and 2.69% from two or more races. 6.33% of the population were Hispanic or Latino of any race. There were 6,249 households, of which 20.8% had children under the age of 18 living with them, 31.4% were married couples living together, 8.1% had a female householder with no husband present, and 57.6% were non-families. 35.5% of all households were made up of individuals, and 9.1% had someone living alone who was 65 years of age or older. The average household size was 2.12 and the average family size was 2.84. In the city, the population was spread out, with 15.8% under the age of 18, 39.3% from 18 to 24, 22.7% from 25 to 44, 12.8% from 45 to 64, and 9.4% who were 65 years of age or older. The median age was 24 years. For every 100 females, there were 95.0 males. For every 100 females age 18 and over, there were 93.1 males. The median income for a household in the city was $20,034, and the median income for a family was $37,625. Males had a median income of $31,022 versus $22,829 for females. The per capita income for the city was $13,662. About 18.8% of families and 34.3% of the population were below the poverty line, including 29.0% of those under age 18 and 11.2% of those age 65 or over. The City of Ellensburg uses the Manager/Council form of government with a City Manager hired by the City Council. The seven-member City Council is elected at large, and members serve 4-year terms. The City Council elects a Mayor and Deputy Mayor from the council to serve 2-year terms. On the state legislative level, Ellensburg is in the 13th district. As of May 2018, its state senator is Republican Judy Warnick, and its two state representatives are Republicans Alex Ybarra and Tom Dent. On the congressional level, Ellensburg is located in Washington's 8th congressional district and is represented by Democrat Kim Schrier. Kittitas County is served by the Daily Record, a newspaper published in Ellensburg five days a week. 
The city maintains its own public library, which opened on January 20, 1910, using funds donated by Andrew Carnegie. Public schools are operated by Ellensburg School District 401. The district includes one high school (Ellensburg High School), one middle school, and four elementary schools.
[ { "paragraph_id": 0, "text": "Ellensburg is a city in and the county seat of Kittitas County, Washington, United States. It is located just east of the Cascade Range near the junction of Interstate 90 and Interstate 82. The population was 18,666 at the 2020 census. and was estimated to be 19,596 in 2021.", "title": "" }, { "paragraph_id": 1, "text": "The city is located along the Yakima River in the Kittitas Valley, an agricultural region that extends east towards the Columbia River. The valley is a major producer of timothy hay, which is processed and shipped internationally. Ellensburg is also the home of Central Washington University (CWU).", "title": "" }, { "paragraph_id": 2, "text": "Ellensburg, originally named Ellensburgh for the wife of town founder John Alden Shoudy, was founded in 1871 and grew rapidly in the 1880s following the arrival of the Northern Pacific Railway. The city was once a leading candidate to become the state capital of Washington, but its campaign was scuppered by a major fire in 1889.", "title": "" }, { "paragraph_id": 3, "text": "John Alden Shoudy arrived in the Kittitas Valley in 1871 and purchased a small trading post from Andrew Jackson \"A.J.\" Splawn, called \"Robber's Roost\". Robber's Roost was the first business in the valley, aside from the early trading that occurred among Native Americans, cattle drivers, trappers, and miners. A small stone monument to Robber's Roost with a placard can be found at its original location, present-day 3rd Avenue, just west of Main Street near the alley.", "title": "History" }, { "paragraph_id": 4, "text": "Shoudy named the new town after his wife, Mary Ellen, thus officially starting the city of Ellensburgh around 1872. Shoudy had not been the first settler nor the first business person in the Kittitas Valley, but he was responsible for platting the city of Ellensburgh in the 1870s and also named the streets in the downtown district. Ellensburgh was officially incorporated on November 26, 1883. In 1894, the final -h was dropped under standardization pressure from the United States Postal Service and Board of Geography Names. Ellensburg was an early center of commerce in Washington and was among the first cities in the state to have electrical service.", "title": "History" }, { "paragraph_id": 5, "text": "The city launched a bid to become Washington state's capital in 1889, preparing a site in the Capital Hill neighborhood for government offices. On July 4 that year, however, a major fire destroyed much of the downtown area and stalled the campaign, which resumed with a series of referendums, in which Washington voters chose Olympia. The state legislature selected Ellensburg as the location for the State Normal School (now Central Washington University).", "title": "History" }, { "paragraph_id": 6, "text": "There were several early newspapers in Ellensburg. The Daily Record, which started in 1909, is the publication which serves the city and county today. 
Concerns over the state of Ellensburg's historic downtown led to the formation of the Ellensburg Downtown Association to work on revitalizing the area.", "title": "History" }, { "paragraph_id": 7, "text": "The City of Ellensburg has several local art museums and galleries:", "title": "Arts and culture" }, { "paragraph_id": 8, "text": "According to the United States Census Bureau, the city has a total area of 6.97 square miles (18.05 km), of which 6.92 square miles (17.92 km) is land and 0.05 square miles (0.13 km) is water.", "title": "Geography" }, { "paragraph_id": 9, "text": "Owing to the strong Cascade rain shadow, Ellensburg experiences a typical Intermountain cool semi-arid climate (Köppen BSk). The hottest temperature recorded in Ellensburg was 110 °F (43.3 °C) on July 26, 1928, while the coldest temperature recorded was −31 °F (−35.0 °C) on December 12, 1919.", "title": "Geography" }, { "paragraph_id": 10, "text": "As of the census of 2010, there were 18,174 people, 7,301 households, and 2,889 families living in the city. The population density was 2,626.3 inhabitants per square mile (1,014.0/km). There were 7,867 housing units at an average density of 1,136.8 per square mile (438.9/km). The racial makeup of the city was 85.7% White, 1.5% African American, 1.0% Native American, 3.2% Asian, 0.2% Pacific Islander, 4.6% from other races, and 3.7% from two or more races. Hispanic or Latino of any race were 9.7% of the population.", "title": "Demographics" }, { "paragraph_id": 11, "text": "There were 7,301 households, of which 19.3% had children under the age of 18 living with them, 28.2% were married couples living together, 8.2% had a female householder with no husband present, 3.1% had a male householder with no wife present, and 60.4% were non-families. 35.1% of all households were made up of individuals, and 9.6% had someone living alone who was 65 years of age or older. The average household size was 2.16 and the average family size was 2.86.", "title": "Demographics" }, { "paragraph_id": 12, "text": "The median age in the city was 23.5 years. 14.2% of residents were under the age of 18; 41.2% were between the ages of 18 and 24; 21.8% were from 25 to 44; 13.9% were from 45 to 64; and 8.9% were 65 years of age or older. The gender makeup of the city was 50.1% male and 49.9% female.", "title": "Demographics" }, { "paragraph_id": 13, "text": "As of the census of 2000, there were 15,414 people, 6,249 households, and 2,649 families living in the city. The population density was 2,338.9 people per square mile (903.1 people/km). There were 6,732 housing units at an average density of 1,021.5 per square mile (394.4/km). The racial makeup of the city was 88.07% White, 1.17% Black or African American, 0.95% Native American, 4.09% Asian, 0.16% Pacific Islander, 2.86% from other races, and 2.69% from two or more races. 6.33% of the population were Hispanic or Latino of any race.", "title": "Demographics" }, { "paragraph_id": 14, "text": "There were 6,249 households, of which 20.8% had children under the age of 18 living with them, 31.4% were married couples living together, 8.1% had a female householder with no husband present, and 57.6% were non-families. 35.5% of all households were made up of individuals, and 9.1% had someone living alone who was 65 years of age or older. 
The average household size was 2.12 and the average family size was 2.84.", "title": "Demographics" }, { "paragraph_id": 15, "text": "In the city, the population was spread out, with 15.8% under the age of 18, 39.3% from 18 to 24, 22.7% from 25 to 44, 12.8% from 45 to 64, and 9.4% who were 65 years of age or older. The median age was 24 years. For every 100 females, there were 95.0 males. For every 100 females age 18 and over, there were 93.1 males.", "title": "Demographics" }, { "paragraph_id": 16, "text": "The median income for a household in the city was $20,034, and the median income for a family was $37,625. Males had a median income of $31,022 versus $22,829 for females. The per capita income for the city was $13,662. About 18.8% of families and 34.3% of the population were below the poverty line, including 29.0% of those under age 18 and 11.2% of those age 65 or over.", "title": "Demographics" }, { "paragraph_id": 17, "text": "The City of Ellensburg uses the Manager/Council form of government with a City Manager hired by the City Council. The seven-member City Council is elected at large and serve 4-year terms. The City Council elects a Mayor and Deputy Mayor from the council to serve 2-year terms.", "title": "Politics and government" }, { "paragraph_id": 18, "text": "On the state legislative level, Ellensburg is in the 13th district. As of May, 2018, its state senator is Republican Judy Warnick, and its two state representatives are Republicans Alex Ybarra and Tom Dent. On the congressional level, Ellensburg is located in Washington's 8th congressional district and is represented by Democrat Kim Schrier.", "title": "Politics and government" }, { "paragraph_id": 19, "text": "Kittitas County is served by the Daily Record, a newspaper published in Ellensburg five days a week.", "title": "Media" }, { "paragraph_id": 20, "text": "The city maintains its own public library, which opened on January 20, 1910, using funds donated by Andrew Carnegie.", "title": "Media" }, { "paragraph_id": 21, "text": "Public schools are operated by Ellensburg School District 401. The district includes one high school (Ellensburg High School), one middle school, and four elementary schools.", "title": "Education" } ]
Ellensburg is a city in and the county seat of Kittitas County, Washington, United States. It is located just east of the Cascade Range near the junction of Interstate 90 and Interstate 82. The population was 18,666 at the 2020 census and was estimated to be 19,596 in 2021. The city is located along the Yakima River in the Kittitas Valley, an agricultural region that extends east towards the Columbia River. The valley is a major producer of timothy hay, which is processed and shipped internationally. Ellensburg is also the home of Central Washington University (CWU). Ellensburg, originally named Ellensburgh for the wife of town founder John Alden Shoudy, was founded in 1871 and grew rapidly in the 1880s following the arrival of the Northern Pacific Railway. The city was once a leading candidate to become the state capital of Washington, but its campaign was scuppered by a major fire in 1889.
2001-07-29T05:01:46Z
2023-12-03T20:26:49Z
[ "Template:Reflist", "Template:Kittitas County, Washington", "Template:Short description", "Template:US Census population", "Template:Wikivoyage", "Template:Washington (state) county seats", "Template:Authority control", "Template:Use mdy dates", "Template:Cn", "Template:ISBN", "Template:Infobox settlement", "Template:Cite web", "Template:Cite news", "Template:Commons category", "Template:Curlie", "Template:Washington", "Template:Convert", "Template:Weather box" ]
https://en.wikipedia.org/wiki/Ellensburg,_Washington
9,623
Eugene, Oregon
Eugene (/juːˈdʒiːn/ yoo-JEEN) is a city in and the county seat of Lane County, Oregon, United States. It is located at the southern end of the Willamette Valley, near the confluence of the McKenzie and Willamette rivers, about 50 miles (80 km) east of the Oregon Coast. The second-most populous city in Oregon, Eugene had a population of 176,654 as of the 2020 United States census, and it covers a city area of 44.21 sq mi (114.5 km). The Eugene-Springfield metropolitan statistical area is the second largest in Oregon behind Portland. In 2022, Eugene's population was estimated to have reached 179,887. Eugene is home to the University of Oregon, Bushnell University, and Lane Community College. The city is noted for its natural environment, recreational opportunities (especially bicycling, running/jogging, rafting, and kayaking), and focus on the arts, along with its history of civil unrest, protests, and green activism. Eugene's official slogan is "A Great City for the Arts and Outdoors". It is also referred to as the "Emerald City" and as "Track Town, USA". The Nike corporation had its beginnings in Eugene. In July 2022, the city hosted the 18th World Athletics Championships. The first people to settle in the Eugene area were the Kalapuyans, also written Calapooia or Calapooya. They made "seasonal rounds," moving around the countryside to collect and preserve local foods, including acorns, the bulbs of the wapato and camas plants, and berries. They stored these foods in their permanent winter village. When crop activities waned, they returned to their winter villages and took up hunting, fishing, and trading. They were known as the Chifin Kalapuyans and called the Eugene area where they lived "Chifin", sometimes recorded as "Chafin" or "Chiffin". Other Kalapuyan tribes occupied villages that are also now within Eugene city limits: the Pee-you or Mohawk Calapooians, the Winefelly or Pleasant Hill Calapooians, and the Lungtum or Long Tom. They were close neighbors of the Chifin, intermarried, and were political allies. Some authorities suggest the Brownsville Kalapuyans (Calapooia Kalapuyans) were related to the Pee-you. It is likely that, since the Santiam had an alliance with the Brownsville Kalapuyans, Santiam influence also extended as far as Eugene. According to archeological evidence, the ancestors of the Kalapuyans may have been in Eugene for as long as 10,000 years. In the 1800s their traditional way of life faced significant changes due to devastating epidemics and settlement, first by French fur traders and later by an overwhelming number of American settlers. French fur traders had settled seasonally in the Willamette Valley by the beginning of the 19th century. Their settlements were concentrated in the "French Prairie" community in Northern Marion County but may have extended south to the Eugene area. Having already developed relationships with Native communities through intermarriage and trade, they negotiated for land from the Kalapuyans. By 1828 to 1830, they and their Native wives began year-round occupation of the land, raising crops and tending animals. In this process, the mixed-race families began to impact Native access to land, food supply, and traditional materials for trade and religious practices. In July 1830, "intermittent fever" struck the lower Columbia region and a year later, the Willamette Valley. Natives traced the arrival of the disease, then new to the Pacific Northwest, to the USS Owyhee, captained by John Dominis. 
"Intermittent fever" is thought by researchers now to be malaria. According to Robert T. Boyd, an anthropologist at Portland State University, the first three years of the epidemic, "probably constitute the single most important epidemiological event in the recorded history of what would eventually become the state of Oregon". In his book The Coming of the Spirit Pestilence Boyd reports there was a 92% population loss for the Kalapuyans between 1830 and 1841. This catastrophic event shattered the social fabric of Kalapuyan society and altered the demographic balance in the Valley. This balance was further altered over the next few years by the arrival of Anglo-American settlers, beginning in 1840 with 13 people and growing steadily each year until within 20 years more than 11,000 American settlers, including Eugene Skinner, had arrived. As the demographic pressure from the settlers grew, the remaining Kalapuyans were forcibly removed to Indian reservations. Though some Natives avoided transfer into the reservation, most were moved to the Grand Ronde reservation in 1856. Strict racial segregation was enforced and mixed race people, known as Métis in French, had to make a choice between the reservation and Anglo-American society. Native Americans could not leave the reservation without traveling papers and white people could not enter the reservation. Eugene Franklin Skinner, after whom Eugene is named, arrived in the Willamette Valley in 1846 with 1,200 other settlers that year. Advised by the Kalapuyans to build on high ground to avoid flooding, he erected the first pioneer cabin on south or west slope of what the Kalapuyans called Ya-po-ah. The "isolated hill" is now known as Skinner's Butte. The cabin was used as a trading post and was registered as an official post office on January 8, 1850. At this time the settlement was known by settlers as Skinner's Mudhole. It was relocated in 1853 and named Eugene City in 1853. Formally incorporated as a city in 1862, it was named simply Eugene in 1889. Skinner ran a ferry service across the Willamette River where the Ferry Street Bridge now stands. The first major educational institution in the area was Columbia College, founded a few years earlier than the University of Oregon. It fell victim to two major fires in four years, and after the second fire, the college decided not to rebuild again. The part of south Eugene known as College Hill was the former location of Columbia College. There is no college there today. The town raised the initial funding to start a public university, which later became the University of Oregon, with the hope of turning the small town into a center of learning. In 1872, the Legislative Assembly passed a bill creating the University of Oregon as a state institution. Eugene bested the nearby town of Albany in the competition for the state university. In 1873, community member J.H.D. Henderson donated the hilltop land for the campus, overlooking the city. The university first opened in 1876 with the regents electing the first faculty and naming John Wesley Johnson as president. The first students registered on October 16, 1876. The first building was completed in 1877; it was named Deady Hall in honor of the first Board of Regents President and community leader Judge Matthew P. Deady. Other universities in Eugene include Bushnell University and New Hope Christian College. 
Eugene grew rapidly throughout most of the twentieth century, with the exception being the early 1980s, when a downturn in the timber industry caused high unemployment. By 1985, the industry had recovered, and Eugene began to attract more high-tech industries, earning it the moniker the "Emerald Shire". In 2012, Eugene and the surrounding metro area were dubbed the Silicon Shire. The first Nike shoe was used in 1972 during the US Olympic trials held in Eugene. The 1970s saw an increase in community activism. Local activists stopped a proposed freeway and lobbied for the construction of the Washington Jefferson Park beneath the Washington-Jefferson Street Bridge. Community Councils soon began to form as a result of these efforts. A notable impact of the turn to community-organized politics came with Eugene Local Measure 51, a ballot measure in 1978 that repealed a gay rights ordinance approved by the Eugene City Council in 1977 that prohibited discrimination by sexual orientation. Eugene is also home to Beyond Toxics, a nonprofit environmental justice organization founded in 2000. One hotspot for protest activity since the 1990s has been the Whiteaker district, located northwest of downtown Eugene. The Whiteaker is primarily a working-class neighborhood that has become a cultural hub, a center of community and activism, and a home to alternative artists. It saw an increase in activity in the 1990s after many young people drawn to Eugene's political climate relocated there. Animal rights groups have had a heavy presence in the Whiteaker, and several vegan restaurants are located there. According to David Samuels, the Animal Liberation Front and the Earth Liberation Front have had an underground presence in the neighborhood. The neighborhood is home to a number of communal apartment buildings, which are often organized by anarchist or environmentalist groups. Local activists have also produced independent films and started art galleries, community gardens, and independent media outlets. Copwatch, Food Not Bombs, and Critical Mass are also active in the neighborhood. According to the United States Census Bureau, the city has a total area of 43.74 square miles (113.29 km), of which 43.72 square miles (113.23 km) is land and 0.02 square miles (0.05 km) is water. Eugene is at an elevation of 426 feet (130 m). To the north of downtown is Skinner Butte. Northeast of the city are the Coburg Hills. Spencer Butte is a prominent landmark south of the city. Mount Pisgah is southeast of Eugene and includes the Mount Pisgah Arboretum and the Howard Buford Recreation Area, a Lane County Park. Eugene is surrounded by foothills and forests to the south, east, and west, while to the north the land levels out into the Willamette Valley and consists of mostly farmland. The Willamette and McKenzie Rivers run through Eugene and its neighboring city, Springfield. Another important stream is Amazon Creek, whose headwaters are near Spencer Butte. The creek discharges into the Long Tom River north of Fern Ridge Reservoir, which is maintained for winter flood control by the Army Corps of Engineers. The Eugene Yacht Club hosts a sailing school and sailing regattas at Fern Ridge during summer months. Eugene has 23 neighborhood associations. The River Road and Santa Clara sections, which make up the northwestern part of the city, are within the urban growth boundary and generally perceived as part of Eugene, but are largely outside of the city limits. 
Like the rest of the Willamette Valley, Eugene lies in the Marine West Coast climate zone, with Mediterranean characteristics. Under the Köppen climate classification scheme, Eugene has a warm-summer Mediterranean climate (Köppen: Csb). Temperatures can vary from cool to warm, with warm, dry summers and cool, wet winters. Spring and fall are also moist seasons, with light rain falling for long periods. The average rainfall is 40.83 inches (1,040 mm), with the wettest "rain year" being from July 1973 to June 1974 with 75.59 inches (1,920.0 mm) and the driest from July 2000 to June 2001 with 20.40 inches (518.2 mm). Measurements taken by NOAA over the past four decades have indicated a significant decline in average annual precipitation. From 1981 to 2010 inclusive, the reported annual average precipitation was 46.1 inches (1,170 mm), but for the thirty-year period ending in 2020, the annual average had declined 5.27 inches (134 mm), to 40.83 inches (1,040 mm). The figures from the second half of that period, or 2006 - 2020 inclusive, pointed to a further decline of more than 4 inches (102 mm), down to an annual average of 36.58 inches (929 mm). Winter snowfall does occur, but it is sporadic and rarely accumulates in large amounts: the normal seasonal amount is 4.9 inches (12 cm), but the median is zero. The record snowfall was 41.7 inches (106 cm) of accumulation due to a pineapple express on January 25–29, 1969. Ice storms, like snowfall, are rare, but occur sporadically. The hottest months are July and August, with a normal monthly mean temperature of 67.8 to 67.9 °F (19.9 to 19.9 °C), with an average of 16 days per year reaching 90 °F (32 °C). The coolest month is December, with a mean temperature of 40.6 °F (4.8 °C), and there are 52 mornings per year with a low at or below freezing, and 2 afternoons with highs not exceeding the freezing mark. The coldest daytime high of the year averages 32 °F (0 °C), reaching the freezing point. Eugene's average annual temperature is 53.1 °F (11.7 °C), and annual precipitation at 40.83 inches (1,040 mm). Eugene is slightly cooler on average than Portland. Despite being located about 100 miles (160 km) south and at an only slightly higher elevation, Eugene has a more continental climate than Portland, less subject to the maritime air that blows inland from the Pacific Ocean via the Columbia River. Eugene's normal annual mean minimum is 41.9 °F (5.5 °C), compared to 46.2 °F (7.9 °C) in Portland; in August, the gap in the normal mean minimum widens to 51.1 and 58.0 °F (10.6 and 14.4 °C) for Eugene and Portland, respectively. Eugene's warmest night annually averages a modest 62 °F (17 °C). Average winter temperatures (and summer high temperatures) are similar for the two cities. Extreme temperatures range from −12 °F (−24 °C), recorded on December 8, 1972, to 111 °F (44 °C) on June 27, 2021; the record cold daily maximum is 19 °F (−7 °C), recorded on December 13, 1919, while, conversely, the record warm daily minimum is 71 °F (22 °C) on July 22, 2006. Eugene is downwind of Willamette Valley grass seed farms. The combination of summer grass pollen and the confining shape of the hills around Eugene make it "the area of the highest grass pollen counts in the USA (>1,500 pollen grains/m of air)." These high pollen counts have led to difficulties for some track athletes who compete in Eugene. In the Olympic trials in 1972, "Jim Ryun won the 1,500 after being flown in by helicopter because he was allergic to Eugene's grass seed pollen." 
Further, six-time Olympian Maria Mutola abandoned Eugene as a training area "in part to avoid allergies". According to the 2010 census, Eugene's population was 156,185. The population density was 3,572.2 people per square mile. There were 69,951 housing units at an average density of 1,600 per square mile. Those age 18 and over accounted for 81.8% of the total population. The racial makeup of the city was 85.8% White, 4.0% Asian, 1.4% Black or African American, 1.0% Native American, 0.2% Pacific Islander, and 4.7% from other races. Hispanics and Latinos of any race accounted for 7.8% of the total population. Of the non-Hispanics, 82% were White, 1.3% Black or African American, 0.8% Native American, 4% Asian, 0.2% Pacific Islander, 0.2% some other race alone, and 3.4% were of two or more races. Females represented 51.1% of the total population, and males represented 48.9%. The median age in the city was 33.8 years. The census of 2000 showed there were 137,893 people, 58,110 households, and 31,321 families residing in the city of Eugene. The population density was 3,404.8 people per square mile (1,314.6 people/km). There were 61,444 housing units at an average density of 1,516.4 per square mile (585.5/km). The racial makeup of the city was 88.15% White, down from 99.5% in 1950, 3.57% Asian, 1.25% Black or African American, 0.93% Native American, 0.21% Pacific Islander, 2.18% from other races, and 3.72% from two or more races. 4.96% of the population were Hispanic or Latino of any race. There were 58,110 households, of which 25.8% had children under the age of 18 living with them, 40.6% were married couples living together, 9.7% had a female householder with no husband present, and 46.1% were non-families. 31.7% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.27 and the average family size was 2.87. In the city, the population was 20.3% under the age of 18, 17.3% from 18 to 24, 28.5% from 25 to 44, 21.8% from 45 to 64, and 12.1% who were 65 years of age or older. The median age was 33 years. For every 100 females, there were 96.0 males. For every 100 females age 18 and over, there were 94.0 males. The median income for a household in the city was $35,850, and the median income for a family was $48,527. Males had a median income of $35,549 versus $26,721 for females. The per capita income for the city was $21,315. About 8.7% of families and 17.1% of the population were below the poverty line, including 14.8% of those under age 18 and 7.1% of those age 65 or over. Eugene's largest employers are PeaceHealth Medical Group, the University of Oregon, and the Eugene School District. Eugene's largest industries are wood products manufacturing and recreational vehicle manufacturing. Corporate headquarters for the employee-owned Bi-Mart corporation and family-owned supermarket Market of Choice remain in Eugene. Many multinational businesses were launched in Eugene. Some of the most famous include Nike, Taco Time, and Brøderbund Software. The footwear repair product Shoe Goo is manufactured by Eclectic Products, based in Eugene. Run Gum, an energy gum created for runners, also began its life in Eugene. Run Gum was created by track athlete Nick Symmonds and track and field coach Sam Lapray in 2014. Burley Design LLC produces bicycle trailers and was founded in Eugene by Alan Scholz out of a Saturday Market business in 1978. 
Eugene is also the birthplace and home of Bike Friday bicycle manufacturer Green Gear Cycling. Organically Grown Company, the largest distributor of organic fruits and vegetables in the northwest, started in Eugene in 1978 as a non-profit co-op for organic farmers. Notable local food processors, many of which manufacture certified organic products, include Golden Temple (Yogi Tea), Merry Hempsters, Springfield Creamery (Nancy's Yogurt), and Mountain Rose Herbs. Until July 2008, Hynix Semiconductor America had operated a large semiconductor plant in west Eugene. In late September 2009, Uni-Chem of South Korea announced its intention to purchase the Hynix site for solar cell manufacturing. However, this deal fell through and, as of late 2012, is no longer planned. In 2015, semiconductor manufacturer Broadcom purchased the plant with plans to upgrade and reopen it. The company abandoned these plans and put it up for sale in November 2016. Luckey's Club Cigar Store is one of the oldest bars in Oregon. Tad Luckey Sr. purchased it in 1911, making it one of the oldest businesses in Eugene. The "Club Cigar", as it was called in the late 19th century, was for many years a men-only salon. It survived both the Great Depression and Prohibition, partly because Eugene was a "dry town" before the end of Prohibition. The city has over 25 breweries and offers a variety of dining options with a local focus; it is also surrounded by wineries. The most notable fungus here is the truffle; Eugene hosts the annual Oregon Truffle Festival in January. In 2012, the Eugene metro region was dubbed the Silicon Shire for its growing tech industry. Eugene's 2017 Comprehensive Annual Financial Report lists the city's top employers. Eugene has a growing problem with homelessness. The problem has been referenced in popular culture, including in the Futurama episode "The 30% Iron Chef". During the COVID-19 pandemic, the city experienced a controversy over its continuing policy of homeless removal, despite CDC guidelines advising against such removals. Eugene has a significant population of people in pursuit of alternative ideas and a large original hippie population. Beginning in the 1960s, the countercultural ideas and viewpoints espoused by area native Ken Kesey became established as the seminal elements of the vibrant social tapestry that continue to define Eugene. The Merry Prankster, as Kesey was known, has arguably left the most indelible imprint of any cultural icon in his hometown. He is best known as the author of One Flew Over the Cuckoo's Nest and as the male protagonist in Tom Wolfe's The Electric Kool-Aid Acid Test. In 2005, the city council unanimously approved a new slogan for the city: "World's Greatest City for the Arts & Outdoors". While Eugene has a vibrant arts community for a city its size and is well situated near many outdoor opportunities, this slogan was frequently criticized by locals as embarrassing and ludicrous. In early 2010, the slogan was changed to "A Great City for the Arts & Outdoors." Eugene's Saturday Market, open every Saturday from April through November, was founded in 1970 as the first "Saturday Market" in the United States. It is adjacent to the Lane County Farmer's Market in downtown Eugene. All vendors must create or grow all their own products. The market reappears as the "Holiday Market" between Thanksgiving and New Year's in the Lane County Events Center at the fairgrounds. Eugene is noted for its "community inventiveness." Many U.S. 
trends in community development originated in Eugene. The University of Oregon's participatory planning process, known as The Oregon Experiment, was the result of student protests in the early 1970s. The book of the same name is a major document in modern enlightenment thinking in planning and architectural circles. The process, still used by the university in modified form, was created by Christopher Alexander, whose works also directly inspired the creation of the Wiki. Some research for the book A Pattern Language, which inspired the Design Patterns movement and Extreme Programming, was done by Alexander in Eugene. Not coincidentally, those engineering movements also had origins here. Decades after its publication, A Pattern Language is still one of the best-selling books on urban design. In the 1970s, Eugene was packed with cooperative and community projects. It still has small natural food stores in many neighborhoods, some of the oldest student cooperatives in the country, and alternative schools have been part of the school district since 1971. The old Grower's Market, downtown near the Amtrak depot, is the only food cooperative in the U.S. with no employees. It is possible to see Eugene's trend-setting non-profit tendencies in much newer projects, such as Square One Villages and the Center for Appropriate Transport. In 2006, an initiative began to create a tenant-run development process for downtown Eugene. In the fall of 2003, neighbors noticed "an unassuming two-acre remnant orchard tucked into the Friendly Area Neighborhood" had been put up for sale by its owner, a resident of New York City. Learning a prospective buyer had plans to build several houses on the property, they formed a nonprofit organization called Madison Meadow in June 2004 in order to buy the property and "preserve it as undeveloped space in perpetuity." In 2007 their effort was named Third Best Community Effort by the Eugene Weekly, and by the end of 2008 they had raised enough money to purchase the property. The City of Eugene has an active Neighborhood Program. Several neighborhoods are known for their green activism. Friendly Neighborhood has a highly popular neighborhood garden established on the right of way of a street never built. There are a number of community gardens on public property. Amazon Neighborhood has a former church turned into a community center. Whiteaker hosts a housing co-op that dates from the early 1970s that has re-purposed both their parking lots into food production and play space. An unusual eco-village with natural building techniques and large shared garden can be found in Jefferson Westside neighborhood. A several block area in the River Road Neighborhood is known as a permaculture hotspot with an increasing number of suburban homes trading grass for garden, installing rain water catchment systems, food producing landscapes and solar retrofits. Several sites have planted gardens by removing driveways. Citizen volunteers are working with the City of Eugene to restore a 65-tree filbert grove on public property. There are deepening social and economic networks in the neighborhood. Eugene museums include the University of Oregon's Jordan Schnitzer Museum of Art and Museum of Natural and Cultural History, the Oregon Air and Space Museum, Lane County History Museum, Maude Kerns Art Center, Shelton McMurphey Johnson House, and the Eugene Science Center. 
Eugene is home to numerous cultural organizations, including the Eugene Symphony (whose previous music directors include Marin Alsop, Giancarlo Guerrero, and Miguel Harth-Bedoya); the Eugene Ballet, a professional full-time touring company; the Eugene Opera, the Eugene Concert Choir, the Bushnell University Community Choir, the Oregon Mozart Players, the Oregon Bach Festival, the Oregon Children's Choir, the Eugene-Springfield Youth Orchestras, Ballet Fantastique and Oregon Festival of American Music. Principal performing arts venues include the Hult Center for the Performing Arts, The John G. Shedd Institute for the Arts ("The Shedd"), the McDonald Theatre, and W.O.W. Hall. The University of Oregon School of Music and Dance also attracts world class performers and teaching artists throughout the year, many of whom perform at Beall Concert Hall. The university campus also frequently hosts performances at Matthew Knight Arena and the Erb Memorial Union ballroom. A number of live theater groups are based in Eugene, including Free Shakespeare in the Park, Oregon Contemporary Theatre, The Very Little Theatre, Actors Cabaret, LCC Theatre, Rose Children's Theatre, and University Theatre. Each has its own performance venue. Because of its status as a college town, Eugene has been home to many music genres, musicians and bands, ranging from electronic dance music such as dubstep and drum and bass to garage rock, hip hop, folk and heavy metal. Eugene also has growing reggae and street-performing bluegrass and jug band scenes. Multi-genre act the Cherry Poppin' Daddies became a prominent figure in Eugene's music scene and became the house band at Eugene's W.O.W. Hall. In the late 1990s, their contributions to the swing revival movement propelled them to national stardom. Rock band Floater originated in Eugene as did the Robert Cray blues band. Doom metal band YOB is among the leaders of the Eugene heavy music scene. Eugene is home to "Classical Gas" Composer and two-time Grammy award winner Mason Williams who spent his years as a youth living between his parents in Oakridge, Oregon and Oklahoma. Mason Williams puts on a yearly Christmas show at the Hult center for performing arts with a full orchestra produced by author, audio engineer and University of Oregon professor Don Latarski. Dick Hyman, noted jazz pianist and musical director for many of Woody Allen's films, designs and hosts the annual Now Hear This! jazz festival at the Oregon Festival of American Music (OFAM). OFAM and the Hult Center routinely draw major jazz talent for concerts. Eugene is also home to a large Zimbabwean music community. Kutsinhira Cultural Arts Center, which is "dedicated to the music and people of Zimbabwe," is based in Eugene. Eugene's visual arts community is supported by over 20 private art galleries and several organizations, including Maude Kerns Art Center, Lane Arts Council, DIVA (the Downtown Initiative for the Visual Arts) and the Eugene Glass School. In 2015 installations from a group of Eugene-based artists known as Light At Play were showcased in several events around the world as part of the International Year of Light, including displays at the Smithsonian and the National Academy of Sciences. The Eugene area has been used as a filming location for several Hollywood films, most famously for 1978's National Lampoon's Animal House, which was also filmed in nearby Cottage Grove. 
John Belushi had the idea for the film The Blues Brothers during filming of Animal House when he happened to meet Curtis Salgado at what was then the Eugene Hotel. Getting Straight, starring Elliott Gould and Candice Bergen, was filmed at Lane Community College in 1969. As the campus was still under construction at the time, the "occupation scenes" were easier to shoot. The "Chicken Salad on Toast" scene in the 1970 Jack Nicholson movie Five Easy Pieces was filmed at the Denny's restaurant at the southern I-5 freeway interchange near Glenwood. Nicholson directed the 1971 film Drive, He Said in Eugene. How to Beat the High Co$t of Living, starring Jane Curtin, Jessica Lange and Susan St. James, was filmed in Eugene in the fall of 1979. Locations visible in the film include Valley River Center (which is a driving force in the plot), Skinner Butte and Ya-Po-Ah Terrace, the Willamette River and River Road Hardware. Several track and field movies have used Eugene as a setting and/or a filming location. Personal Best, starring Mariel Hemingway, was filmed in Eugene in 1982. The film centered on a group of women who are trying to qualify for the Olympic track and field team. Two track and field movies about the life of Steve Prefontaine, Prefontaine and Without Limits, were released within a year of each other in 1997–1998. Kenny Moore, Eugene-trained Olympic runner and co-star in Prefontaine, co-wrote the screenplay for Without Limits. Prefontaine was filmed in Washington because the Without Limits production bought out Hayward Field for the summer to prevent its competition from shooting there. Kenny Moore also wrote a biography of Bill Bowerman, played in Without Limits by Donald Sutherland back in Eugene 20 years after he had appeared in Animal House. Moore had also had a role in Personal Best. Stealing Time, a 2003 independent film, was partially filmed in Eugene. When the film premiered in June 2001 at the Seattle International Film Festival, it was titled Rennie's Landing after a popular bar near the University of Oregon campus. The title was changed for its DVD release. Zerophilia was filmed in Eugene in 2006. The 2016 Tracktown was about a distance runner training for the Olympics in Eugene. Religious institutions of higher learning in Eugene include Bushnell University and New Hope Christian College. Bushnell University (formerly Northwest Christian University), founded in 1895, has ties with the Christian Church (Disciples of Christ). New Hope Christian College (formerly Eugene Bible College) originated with the Bible Standard Conference in 1915, which joined with Open Bible Evangelistic Association to create Open Bible Standard Churches in 1932. Eugene Bible College was started from this movement by Fred Hornshuh in 1925. There are two Eastern Orthodox Church parishes in Eugene: St John the Wonderworker Orthodox Christian Church in the Historic Whiteaker Neighborhood and Saint George Greek Orthodox Church. There are six Roman Catholic parishes in Eugene as well: St. Mary Catholic Church, St. Jude Catholic Church, St. Mark Catholic Church, St. Peter Catholic Church, St. Paul Catholic Church, and St. Thomas More Catholic Church. Eugene also has a Ukrainian Catholic Church named Nativity of the Mother of God. There is a mainline Protestant contingency in the city as well—such as the largest of the Lutheran Churches, Central Lutheran near the U of O Campus and the Episcopal Church of the Resurrection. 
The Eugene area has a sizeable LDS Church presence, with three stakes, consisting of 23 congregations (wards and branches). The Church of Jesus Christ announced plans in April 2020 to build a temple in Eugene. The greater Eugene-Springfield area also has a Jehovah's Witnesses presence with five Kingdom Halls, several having multiple congregations in one Kingdom Hall. The Reconstructionist Temple Beth Israel is Eugene's largest Jewish congregation. It was also, for many decades, Eugene's only synagogue, until Orthodox members broke away in 1992 and formed "Congregation Ahavas Torah". Eugene has a community of some 140 Sikhs, who have established a Sikh temple. The 340-member congregation of the Unitarian Universalist Church in Eugene (UUCE) purchased the former Eugene Scottish Rite Temple in May 2010, renovated it, and began services there in September 2012. Saraha Nyingma Buddhist Temple in Eugene opened in 2012 on the former site of the Unitarian Universalist Church. The First Congregational Church, UCC, is a large progressive Christian church with a long history of justice-focused ministries and a very active membership. Three years ago, the congregation coordinated with the Connections Program of the St. Vincent de Paul organization to provide transitional homes for two unhoused families on the church's property. Through life-skills support and training and a more stable housing situation, these families are then able to make their way into independent living. Eugene markets itself as "Track Town USA". There are close links between the University of Oregon's successful track & field program, the Oregon Track Club, and Nike, Inc., which was founded by University of Oregon track athlete Phil Knight and his coach, Bill Bowerman. Eugene's miles of running trails, through its unusually large park system, are among the most extensive in the U.S. Notable trails include Pre's Trail in Alton Baker Park, Rexius Trail, the Adidas Oregon Trail, and the Ridgeline Trail. There is also an extensive network of trails along the Willamette River that reaches into neighboring Springfield, as well as along Amazon Creek in the southern and western parts of town. Jogging was introduced to the U.S. through Eugene, brought from New Zealand by Bill Bowerman, who wrote the best-selling book "Jogging", and coached the champion University of Oregon track and cross country teams. During Bowerman's tenure, his "Men of Oregon" won 24 individual NCAA titles, including titles in 15 out of the 19 events contested. During Bowerman's 24 years at Oregon, his track teams finished in the top ten at the NCAA championships 16 times, including four team titles (1962, '64, '65, '70), and two second-place trophies. His teams also posted a dual meet record of 114–20. Bowerman also invented the waffle sole for running shoes in Eugene, and with Oregon alumnus Phil Knight founded shoe giant Nike. The city has dozens of running clubs. The climate is cool and temperate, good both for jogging and record-setting. Eugene is home to the University of Oregon's Hayward Field track, which hosts numerous collegiate and amateur track and field meets throughout the year, most notably the Prefontaine Classic. 
Hayward Field was host to the 2004 AAU Junior Olympic Games, the 1989 World Masters Athletics Championships, the track and field events of the 1998 World Masters Games, the 2006 Pacific-10 track and field championships, the 1971, 1975, 1986, 1993, 1999, 2001, 2009, and 2011 USA Track & Field Outdoor Championships and the 1972, 1976, 1980, 2008, 2012, and 2016 U.S. Olympic trials. Eugene is the host of the delayed 2021 World Athletics Championships. The city bid for the 2019 event but lost narrowly to Doha, Qatar. Eugene's Oregon Ducks are part of the Pac-12 Conference (Pac-12). American football is especially popular, with intense rivalries between the Ducks and both the Oregon State University Beavers and the University of Washington Huskies. Autzen Stadium is home to Duck football, with a seating capacity of 54,000 but has had over 60,000 with standing room only. The basketball arena, McArthur Court, was built in 1926. The arena was replaced by the Matthew Knight Arena in late 2010. The Nationwide Tour's golfing event Oregon Classic takes place at Shadow Hills Country Club, just north of Eugene. The event has been played every year since 1998, except in 2001 when it was slated to begin the day after the 9/11 terrorist attacks. The top 20 players from the Nationwide Tour are promoted to the PGA Tour for the following year. Eugene is also home to the Eugene Emeralds, a short-season Class A minor-league baseball team. The "Ems" play their home games in PK Park, also the home of the University of Oregon baseball team. The Eugene Jr. Generals, a Tier III Junior "A" ice hockey team belonging to the Northern Pacific Hockey League (NPHL) consisting of 8 teams throughout Oregon and Washington, plays at the Lane County Ice Center. Lane United FC, a soccer club that participates in the Northwest Division of USL League Two, was founded in 2013 and plays its home games at Civic Park. The following table lists some sports clubs in Eugene and their usual home venue: Spencer Butte Park at the southern edge of town provides access to Spencer Butte, a dominant feature of Eugene's skyline. Hendricks Park, situated on a knoll to the east of downtown, is known for its rhododendron garden and nearby memorial to Steve Prefontaine, known as Pre's Rock, where the legendary University of Oregon runner was killed in an auto accident. Alton Baker Park, next to the Willamette River, contains Pre's Trail. Also next to the Willamette are Skinner Butte Park and the Owen Memorial Rose Garden, which contains more than 4,500 roses of over 400 varieties, as well as the 150-year-old Black Tartarian Cherry tree, an Oregon Heritage Tree. The city of Eugene maintains an urban forest. The University of Oregon campus is an arboretum, with over 500 species of trees. The city operates and maintains scenic hiking trails that pass through and across the ridges of a cluster of hills in the southern portion of the city, on the fringe of residential neighborhoods. Some trails allow biking, and others are for hikers and runners only. The nearest ski resort, Willamette Pass, is one hour from Eugene by car. On the way, along Oregon Route 58, are several reservoirs and lakes, the Oakridge mountain bike trails, hot springs, and waterfalls within Willamette National Forest. Eugene residents also frequent the Hoodoo and Mount Bachelor ski resorts. The Three Sisters Wilderness, the Oregon Dunes National Recreation Area, and Smith Rock are just a short drive away. 
In 1944, Eugene adopted a council–manager form of government, replacing the day-to-day management of city affairs by the part-time mayor and volunteer city council with a full-time professional city manager. The subsequent history of Eugene city government has largely been one of the dynamics, often contentious, between the city manager, the mayor, and the city council.
According to statute, all Eugene and Lane County elections are officially non-partisan, with a primary containing all candidates in May. If a candidate gets more than 50% of the vote in the primary, they win the election outright; otherwise, the top two candidates face off in a November runoff. This allows candidates to win seats during the lower-turnout primary election.
The mayor of Eugene is Lucy Vinis, who has been in office since winning the popular vote in May 2016, and who was re-elected in May 2020. Recent mayors include Edwin Cone (1958–69), Les Anderson (1969–77), Gus Keller (1977–84), Brian Obie (1985–88), Jeff Miller (1989–92), Ruth Bascom (1993–96), Jim Torrey (1997–2004), and Kitty Piercy (2005–17).
Mayor: Lucy Vinis
The Eugene Police Department is the city's law enforcement and public safety agency. The Lane County Sheriff's Office also has its headquarters in Eugene.
The University of Oregon is served by the University of Oregon Police Department, and the Eugene Police Department also has a police station in the West University District near campus. Lane Community College is served by the Lane Community College Public Safety Department. The Oregon State Police have a presence in the rural areas and highways around the Eugene metro area. The LTD downtown station and the EmX lines are patrolled by LTD Transit Officers. Since 1989, the mental health crisis intervention non-governmental agency CAHOOTS has responded to Eugene's mental health 911 calls.
Eugene-Springfield Fire Department is the agency responsible for emergency medical services, fire suppression, HAZMAT operations, and water/confined-space rescues in the combined Eugene-Springfield metropolitan area.
Eugene used to have an ordinance which prohibited car horn usage for non-driving purposes. After several residents were cited for this offense during the anti-Gulf War demonstrations in January 1991, the city was taken to court, and in 1992 the Oregon Court of Appeals overturned the ordinance, finding it unconstitutionally vague. Eugene City Hall was vacated in 2012 because of concerns about its structural integrity, energy efficiency, and obsolete size. Various offices of city government became tenants in eight other buildings.
Because Eugene is by far the largest city in Lane County, its voters almost always decide the county's partisan tilt. While Eugene has historically been a counter-culture-heavy and left-leaning college town, the county's partisan leanings have intensified in recent decades, mirroring the general polarization of Oregon voters along urban (pro-Democratic) and rural (pro-Republican) lines.
In the 2016 Democratic primary, Lane County voted for Bernie Sanders over eventual nominee Hillary Clinton by 60.6% to 38.1%, and Eugene offered Sanders an even larger share of its vote.
Eugene is home to the University of Oregon. Other institutions of higher learning include Bushnell University, Lane Community College, New Hope Christian College, Gutenberg College, and Pacific University's Eugene campus.
The Eugene School District includes four full-service high schools (Churchill, North Eugene, Sheldon, and South Eugene) and many alternative education programs, such as international schools and charter schools. Foreign language immersion programs in the district are available in Spanish, French, Chinese, and Japanese.
The Bethel School District serves children in the Bethel neighborhood on the northwest edge of Eugene. The district is home to the traditional Willamette High School and the alternative Kalapuya High School. There are 11 schools in this district.
Eugene also has several private schools, including the Eugene Waldorf School, the Outdoor High School, Eugene Montessori, Far Horizon Montessori, Eugene Sudbury School, Wellsprings Friends School, Oak Hill School, and The Little French School.
Parochial schools in Eugene include Marist Catholic High School, O'Hara Catholic Elementary School, Eugene Christian School, and St. Paul Parish School.
The largest library in Oregon is the University of Oregon's Knight Library, with collections totaling more than 3 million volumes and over 100,000 audio and video items. The Eugene Public Library moved into a new, larger building downtown in 2002. The four-story library represents an increase from 38,000 to 130,000 square feet (3,500 to 12,100 m²). There are also two branches of the Eugene Public Library: the Sheldon Branch Library in the neighborhood of Cal Young/Sheldon, and the Bethel Branch Library in the neighborhood of Bethel. Eugene also has the Lane County Law Library.
The largest newspaper serving the area is The Register-Guard, a daily newspaper with a circulation of about 70,000, published independently by the Baker family of Eugene until 2018, when it was acquired by GateHouse Media (now owned by Gannett Company). Other newspapers serving the area include the Eugene Weekly; the Emerald, the student-run independent newspaper at the University of Oregon, now published on Mondays and Thursdays; The Torch, the student-run newspaper at Lane Community College; the Ignite, the newspaper at New Hope Christian College; and The Beacon Bolt, the student-run newspaper at Bushnell University. Eugene Magazine, Lifestyle Quarterly, Eugene Living, and Sustainable Home and Garden magazines also serve the area. Adelante Latino is a Spanish-language newspaper in Eugene that serves all of Lane County.
Local television stations include KMTR (NBC), KVAL (CBS), KLSR-TV (Fox), KEVU-CD, KEZI (ABC), KEPB (PBS), and KTVC (independent).
The local NPR affiliates are KOPB and KLCC. Radio station KRVM-AM is an affiliate of Jefferson Public Radio, based at Southern Oregon University. The Pacifica Radio affiliate is the University of Oregon student-run radio station, KWVA. Additionally, the community supports two other radio stations: KWAX (classical) and KRVM-FM (alternative).
AM stations
FM stations
Lane Transit District (LTD), a public transportation agency formed in 1970, covers 240 square miles (620 km²) of Lane County, including Creswell, Cottage Grove, Junction City, Veneta, and Blue River. Operating more than 90 buses during peak hours, LTD carries riders on 3.7 million trips every year. LTD also operates a bus rapid transit line that runs between Eugene and Springfield, the Emerald Express (EmX), much of which runs in its own lane, with stations providing for off-board fare payment. LTD's main terminus in Eugene is at the Eugene Station. LTD also offers paratransit.
Greyhound Lines provides service between Los Angeles and Portland on the I-5 corridor.
Cycling is popular in Eugene, and many people commute via bicycle. Summertime events and festivals frequently have valet bicycle parking corrals that are often filled to capacity by three hundred or more bikes. Many people commute to work by bicycle every month of the year. PeaceHealth Rides, a bike share system formerly operated by Uber subsidiary JUMP and currently operated by the non-profit Cascadia Mobility, offers 300 city-owned bicycles to the public for a small fee. Bike trails take commuting and recreational bikers along the Willamette River past a scenic rose garden, along Amazon Creek, through the downtown, and through the University of Oregon campus. Eugene is close to many popular mountain bike trails, and Disciples of Dirt is the local mountain bike club that organizes group rides and promotes trail stewardship.
In 2009, the League of American Bicyclists cited Eugene as one of ten "Gold-level" cities in the U.S. because of its "remarkable commitments to bicycling." In 2010, Bicycling magazine named Eugene the 5th most bike-friendly city in America. The U.S. Census Bureau's annual American Community Survey reported that Eugene had a bicycle commuting mode share of 7.3% in 2011, the fifth-highest percentage nationwide among U.S. cities with 65,000 people or more and about 13 times the national average of 0.56%.
The 1908 Amtrak depot downtown was restored in 2004; it is the southern terminus for two daily runs of the Amtrak Cascades, and a stop along the route in each direction for the daily Coast Starlight.
Air travel is served by the Eugene Airport, also known as Mahlon Sweet Field, the fifth-largest airport in the Northwest and the second-largest airport in Oregon. The Eugene metro area also has numerous private airports, as well as several heliports, such as the Sacred Heart Medical Center Heliport and the Mahlon Sweet Field Heliport, and many single helipads.
Highways traveling within and through Eugene include:
Eugene is the home of Oregon's largest publicly owned water and power utility, the Eugene Water & Electric Board (EWEB). EWEB got its start in the first decade of the 20th century, after an epidemic of typhoid was traced to the groundwater supply. The City of Eugene condemned Eugene's private water utility and began treating river water (first the Willamette; later the McKenzie) for domestic use. EWEB got into the electric business when power was needed for the water pumps. Excess electricity generated by EWEB's hydropower plants was used for street lighting.
Natural gas service is provided by NW Natural. Wastewater treatment services are provided by the Metropolitan Wastewater Management Commission, a partnership between the Cities of Eugene and Springfield and Lane County.
Three hospitals serve the Eugene-Springfield area. Sacred Heart Medical Center University District is the only one within Eugene city limits. McKenzie-Willamette Medical Center and Sacred Heart Medical Center at RiverBend are in Springfield. Oregon Medical Group, a primary-care-based multi-specialty group, operates several clinics in Eugene, as does PeaceHealth Medical Group. White Bird Clinic provides a broad range of health and human services, including low-cost clinics. The Volunteers in Medicine & Occupy Medical clinics provide free medical and mental health care to low-income adults without health insurance.
Eugene is one of the few municipalities in the US that does not fluoridate its water supply.
Eugene has four sister cities:
[ { "paragraph_id": 0, "text": "Eugene (/juːˈdʒiːn/ yoo-JEEN) is a city in and the county seat of Lane County, Oregon, United States. It is located at the southern end of the Willamette Valley, near the confluence of the McKenzie and Willamette rivers, about 50 miles (80 km) east of the Oregon Coast.", "title": "" }, { "paragraph_id": 1, "text": "The second-most populous city in Oregon, Eugene had a population of 176,654 as of the 2020 United States census and it covers city area of 44.21 sq mi (114.5 km). The Eugene-Springfield metropolitan statistical area is the second largest in Oregon behind Portland. In 2022, Eugene's population was estimated to have reached 179,887.", "title": "" }, { "paragraph_id": 2, "text": "Eugene is home to the University of Oregon, Bushnell University, and Lane Community College. The city is noted for its natural environment, recreational opportunities (especially bicycling, running/jogging, rafting, and kayaking), and focus on the arts, along with its history of civil unrest, protests, and green activism. Eugene's official slogan is \"A Great City for the Arts and Outdoors\". It is also referred to as the \"Emerald City\" and as \"Track Town, USA\". The Nike corporation had its beginnings in Eugene. In July 2022, the city hosted the 18th World Athletics Championship.", "title": "" }, { "paragraph_id": 3, "text": "The first people to settle in the Eugene area were the Kalapuyans, also written Calapooia or Calapooya. They made \"seasonal rounds,\" moving around the countryside to collect and preserve local foods, including acorns, the bulbs of the wapato and camas plants, and berries. They stored these foods in their permanent winter village. When crop activities waned, they returned to their winter villages and took up hunting, fishing, and trading. They were known as the Chifin Kalapuyans and called the Eugene area where they lived \"Chifin\", sometimes recorded as \"Chafin\" or \"Chiffin\".", "title": "History" }, { "paragraph_id": 4, "text": "Other Kalapuyan tribes occupied villages that are also now within Eugene city limits. Pee-you or Mohawk Calapooians, Winefelly or Pleasant Hill Calapooians, and the Lungtum or Long Tom. They were close-neighbors to the Chifin, intermarried, and were political allies. Some authorities suggest the Brownsville Kalapuyans (Calapooia Kalapuyans) were related to the Pee-you. It is likely that since the Santiam had an alliance with the Brownsville Kalapuyans that the Santiam influence also went as far at Eugene.", "title": "History" }, { "paragraph_id": 5, "text": "According to archeological evidence, the ancestors of the Kalapuyans may have been in Eugene for as long as 10,000 years. In the 1800s their traditional way of life faced significant changes due to devastating epidemics and settlement, first by French fur traders and later by an overwhelming number of American settlers.", "title": "History" }, { "paragraph_id": 6, "text": "French fur traders had settled seasonally in the Willamette Valley by the beginning of the 19th century. Their settlements were concentrated in the \"French Prairie\" community in Northern Marion County but may have extended south to the Eugene area. Having already developed relationships with Native communities through intermarriage and trade, they negotiated for land from the Kalapuyans. By 1828 to 1830 they and their Native wives began year-round occupation of the land, raising crops and tending animals. 
In this process, the mixed race families began to impact Native access to land, food supply, and traditional materials for trade and religious practices.", "title": "History" }, { "paragraph_id": 7, "text": "In July 1830, \"intermittent fever\" struck the lower Columbia region and a year later, the Willamette Valley. Natives traced the arrival of the disease, then new to the Pacific Northwest, to the USS Owyhee, captained by John Dominis. \"Intermittent fever\" is thought by researchers now to be malaria. According to Robert T. Boyd, an anthropologist at Portland State University, the first three years of the epidemic, \"probably constitute the single most important epidemiological event in the recorded history of what would eventually become the state of Oregon\". In his book The Coming of the Spirit Pestilence Boyd reports there was a 92% population loss for the Kalapuyans between 1830 and 1841. This catastrophic event shattered the social fabric of Kalapuyan society and altered the demographic balance in the Valley. This balance was further altered over the next few years by the arrival of Anglo-American settlers, beginning in 1840 with 13 people and growing steadily each year until within 20 years more than 11,000 American settlers, including Eugene Skinner, had arrived.", "title": "History" }, { "paragraph_id": 8, "text": "As the demographic pressure from the settlers grew, the remaining Kalapuyans were forcibly removed to Indian reservations. Though some Natives avoided transfer into the reservation, most were moved to the Grand Ronde reservation in 1856. Strict racial segregation was enforced and mixed race people, known as Métis in French, had to make a choice between the reservation and Anglo-American society. Native Americans could not leave the reservation without traveling papers and white people could not enter the reservation.", "title": "History" }, { "paragraph_id": 9, "text": "Eugene Franklin Skinner, after whom Eugene is named, arrived in the Willamette Valley in 1846 with 1,200 other settlers that year. Advised by the Kalapuyans to build on high ground to avoid flooding, he erected the first pioneer cabin on south or west slope of what the Kalapuyans called Ya-po-ah. The \"isolated hill\" is now known as Skinner's Butte. The cabin was used as a trading post and was registered as an official post office on January 8, 1850.", "title": "History" }, { "paragraph_id": 10, "text": "At this time the settlement was known by settlers as Skinner's Mudhole. It was relocated in 1853 and named Eugene City in 1853. Formally incorporated as a city in 1862, it was named simply Eugene in 1889. Skinner ran a ferry service across the Willamette River where the Ferry Street Bridge now stands.", "title": "History" }, { "paragraph_id": 11, "text": "The first major educational institution in the area was Columbia College, founded a few years earlier than the University of Oregon. It fell victim to two major fires in four years, and after the second fire, the college decided not to rebuild again. The part of south Eugene known as College Hill was the former location of Columbia College. There is no college there today.", "title": "History" }, { "paragraph_id": 12, "text": "The town raised the initial funding to start a public university, which later became the University of Oregon, with the hope of turning the small town into a center of learning. In 1872, the Legislative Assembly passed a bill creating the University of Oregon as a state institution. 
Eugene bested the nearby town of Albany in the competition for the state university. In 1873, community member J.H.D. Henderson donated the hilltop land for the campus, overlooking the city. The university first opened in 1876 with the regents electing the first faculty and naming John Wesley Johnson as president. The first students registered on October 16, 1876. The first building was completed in 1877; it was named Deady Hall in honor of the first Board of Regents President and community leader Judge Matthew P. Deady.", "title": "History" }, { "paragraph_id": 13, "text": "Other universities in Eugene include Bushnell University and New Hope Christian College.", "title": "History" }, { "paragraph_id": 14, "text": "Eugene grew rapidly throughout most of the twentieth century, with the exception being the early 1980s when a downturn in the timber industry caused high unemployment. By 1985, the industry had recovered and Eugene began to attract more high-tech industries, earning it the moniker the \"Emerald Shire\". In 2012, Eugene and the surrounding metro area was dubbed the Silicon shire.", "title": "History" }, { "paragraph_id": 15, "text": "The first Nike shoe was used in 1972 during the US Olympic trials held in Eugene.", "title": "History" }, { "paragraph_id": 16, "text": "The 1970s saw an increase in community activism. Local activists stopped a proposed freeway and lobbied for the construction of the Washington Jefferson Park beneath the Washington-Jefferson Street Bridge. Community Councils soon began to form as a result of these efforts. A notable impact of the turn to community-organized politics came with Eugene Local Measure 51, a ballot measure in 1978 that repealed a gay rights ordinance approved by the Eugene City Council in 1977 that prohibited discrimination by sexual orientation. Eugene is also home to Beyond Toxics, a nonprofit environmental justice organization founded in 2000.", "title": "History" }, { "paragraph_id": 17, "text": "One hotspot for protest activity since the 1990s has been the Whitaker district, located in the northwest of downtown Eugene. Whitaker is primarily a working-class neighborhood that has become a cultural hub, center of community and activism and home to alternative artists. It saw an increase of activity in the 1990s after many young people drawn to Eugene's political climate relocated there. Animal rights groups have had a heavy presence in the Whiteaker, and several vegan restaurants are located there. According to David Samuels, the Animal Liberation Front and the Earth Liberation Front have had an underground presence in the neighborhood. The neighborhood is home to a number of communal apartment buildings, which are often organized by anarchist or environmentalist groups. Local activists have also produced independent films and started art galleries, community gardens, and independent media outlets. Copwatch, Food Not Bombs, and Critical Mass are also active in the neighborhood.", "title": "History" }, { "paragraph_id": 18, "text": "According to the United States Census Bureau, the city has a total area of 43.74 square miles (113.29 km), of which 43.72 square miles (113.23 km) is land and 0.02 square miles (0.05 km) is water. Eugene is at an elevation of 426 feet (130 m).", "title": "Geography" }, { "paragraph_id": 19, "text": "To the north of downtown is Skinner Butte. Northeast of the city are the Coburg Hills. Spencer Butte is a prominent landmark south of the city. 
Mount Pisgah is southeast of Eugene and includes the Mount Pisgah Arboretum and the Howard Buford Recreation Area, a Lane County Park. Eugene is surrounded by foothills and forests to the south, east, and west, while to the north the land levels out into the Willamette Valley and consists of mostly farmland.", "title": "Geography" }, { "paragraph_id": 20, "text": "The Willamette and McKenzie Rivers run through Eugene and its neighboring city, Springfield. Another important stream is Amazon Creek, whose headwaters are near Spencer Butte. The creek discharges into the Long Tom River north Fern Ridge Reservoir, maintained for winter flood control by the Army Corps of Engineers. The Eugene Yacht Club hosts a sailing school and sailing regattas at Fern Ridge during summer months.", "title": "Geography" }, { "paragraph_id": 21, "text": "Eugene has 23 neighborhood associations:", "title": "Geography" }, { "paragraph_id": 22, "text": "The River Road and Santa Clara sections, which make up the northwestern part of the city, are within the urban growth boundary and generally perceived as part of Eugene, but are largely outside of the city limits.", "title": "Geography" }, { "paragraph_id": 23, "text": "Like the rest of the Willamette Valley, Eugene lies in the Marine West Coast climate zone, with Mediterranean characteristics. Under the Köppen climate classification scheme, Eugene has a warm-summer Mediterranean climate (Köppen: Csb). Temperatures can vary from cool to warm, with warm, dry summers and cool, wet winters. Spring and fall are also moist seasons, with light rain falling for long periods. The average rainfall is 40.83 inches (1,040 mm), with the wettest \"rain year\" being from July 1973 to June 1974 with 75.59 inches (1,920.0 mm) and the driest from July 2000 to June 2001 with 20.40 inches (518.2 mm). Measurements taken by NOAA over the past four decades have indicated a significant decline in average annual precipitation. From 1981 to 2010 inclusive, the reported annual average precipitation was 46.1 inches (1,170 mm), but for the thirty-year period ending in 2020, the annual average had declined 5.27 inches (134 mm), to 40.83 inches (1,040 mm). The figures from the second half of that period, or 2006 - 2020 inclusive, pointed to a further decline of more than 4 inches (102 mm), down to an annual average of 36.58 inches (929 mm).", "title": "Geography" }, { "paragraph_id": 24, "text": "Winter snowfall does occur, but it is sporadic and rarely accumulates in large amounts: the normal seasonal amount is 4.9 inches (12 cm), but the median is zero. The record snowfall was 41.7 inches (106 cm) of accumulation due to a pineapple express on January 25–29, 1969. Ice storms, like snowfall, are rare, but occur sporadically.", "title": "Geography" }, { "paragraph_id": 25, "text": "The hottest months are July and August, with a normal monthly mean temperature of 67.8 to 67.9 °F (19.9 to 19.9 °C), with an average of 16 days per year reaching 90 °F (32 °C). The coolest month is December, with a mean temperature of 40.6 °F (4.8 °C), and there are 52 mornings per year with a low at or below freezing, and 2 afternoons with highs not exceeding the freezing mark. The coldest daytime high of the year averages 32 °F (0 °C), reaching the freezing point.", "title": "Geography" }, { "paragraph_id": 26, "text": "Eugene's average annual temperature is 53.1 °F (11.7 °C), and annual precipitation at 40.83 inches (1,040 mm). Eugene is slightly cooler on average than Portland. 
Despite being located about 100 miles (160 km) south and at an only slightly higher elevation, Eugene has a more continental climate than Portland, less subject to the maritime air that blows inland from the Pacific Ocean via the Columbia River. Eugene's normal annual mean minimum is 41.9 °F (5.5 °C), compared to 46.2 °F (7.9 °C) in Portland; in August, the gap in the normal mean minimum widens to 51.1 and 58.0 °F (10.6 and 14.4 °C) for Eugene and Portland, respectively. Eugene's warmest night annually averages a modest 62 °F (17 °C). Average winter temperatures (and summer high temperatures) are similar for the two cities.", "title": "Geography" }, { "paragraph_id": 27, "text": "Extreme temperatures range from −12 °F (−24 °C), recorded on December 8, 1972, to 111 °F (44 °C) on June 27, 2021; the record cold daily maximum is 19 °F (−7 °C), recorded on December 13, 1919, while, conversely, the record warm daily minimum is 71 °F (22 °C) on July 22, 2006.", "title": "Geography" }, { "paragraph_id": 28, "text": "Eugene is downwind of Willamette Valley grass seed farms. The combination of summer grass pollen and the confining shape of the hills around Eugene make it \"the area of the highest grass pollen counts in the USA (>1,500 pollen grains/m of air).\" These high pollen counts have led to difficulties for some track athletes who compete in Eugene. In the Olympic trials in 1972, \"Jim Ryun won the 1,500 after being flown in by helicopter because he was allergic to Eugene's grass seed pollen.\" Further, six-time Olympian Maria Mutola abandoned Eugene as a training area \"in part to avoid allergies\".", "title": "Geography" }, { "paragraph_id": 29, "text": "According to the 2010 census, Eugene's population was 156,185. The population density was 3,572.2 people per square mile. There were 69,951 housing units at an average density of 1,600 per square mile. Those age 18 and over accounted for 81.8% of the total population.", "title": "Demographics" }, { "paragraph_id": 30, "text": "The racial makeup of the city was 85.8% White, 4.0% Asian, 1.4% Black or African American, 1.0% Native American, 0.2% Pacific Islander, and 4.7% from other races.", "title": "Demographics" }, { "paragraph_id": 31, "text": "Hispanics and Latinos of any race accounted for 7.8% of the total population. Of the non-Hispanics, 82% were White, 1.3% Black or African American, 0.8% Native American, 4% Asian, 0.2% Pacific Islander, 0.2% some other race alone, and 3.4% were of two or more races.", "title": "Demographics" }, { "paragraph_id": 32, "text": "Females represented 51.1% of the total population, and males represented 48.9%. The median age in the city was 33.8 years.", "title": "Demographics" }, { "paragraph_id": 33, "text": "The census of 2000 showed there were 137,893 people, 58,110 households, and 31,321 families residing in the city of Eugene. The population density was 3,404.8 people per square mile (1,314.6 people/km). There were 61,444 housing units at an average density of 1,516.4 per square mile (585.5/km). The racial makeup of the city was 88.15% White, down from 99.5% in 1950, 3.57% Asian, 1.25% Black or African American, 0.93% Native American, 0.21% Pacific Islander, 2.18% from other races, and 3.72% from two or more races. 
4.96% of the population were Hispanic or Latino of any race.", "title": "Demographics" }, { "paragraph_id": 34, "text": "There were 58,110 households, of which 25.8% had children under the age of 18 living with them, 40.6% were married couples living together, 9.7% had a female householder with no husband present, and 46.1% were non-families. 31.7% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.27 and the average family size was 2.87. In the city, the population was 20.3% under the age of 18, 17.3% from 18 to 24, 28.5% from 25 to 44, 21.8% from 45 to 64, and 12.1% who were 65 years of age or older. The median age was 33 years. For every 100 females, there were 96.0 males. For every 100 females age 18 and over, there were 94.0 males. The median income for a household in the city was $35,850, and the median income for a family was $48,527. Males had a median income of $35,549 versus $26,721 for females. The per capita income for the city was $21,315. About 8.7% of families and 17.1% of the population were below the poverty line, including 14.8% of those under age 18 and 7.1% of those age 65 or over.", "title": "Demographics" }, { "paragraph_id": 35, "text": "Eugene's largest employers are PeaceHealth Medical Group, the University of Oregon, and the Eugene School District. Eugene's largest industries are wood products manufacturing and recreational vehicle manufacturing.", "title": "Economy" }, { "paragraph_id": 36, "text": "Corporate headquarters for the employee-owned Bi-Mart corporation and family-owned supermarket Market of Choice remain in Eugene.", "title": "Economy" }, { "paragraph_id": 37, "text": "Many multinational businesses were launched in Eugene. Some of the most famous include Nike, Taco Time, and Brøderbund Software.", "title": "Economy" }, { "paragraph_id": 38, "text": "The footwear repair product Shoe Goo is manufactured by Eclectic Products, based in Eugene.", "title": "Economy" }, { "paragraph_id": 39, "text": "Run Gum, an energy gum created for runners, also began its life in Eugene. Run Gum was created by track athlete Nick Symmonds and track and field coach Sam Lapray in 2014.", "title": "Economy" }, { "paragraph_id": 40, "text": "Burley Design LLC produces bicycle trailers and was founded in Eugene by Alan Scholz out of a Saturday Market business in 1978. Eugene is also the birthplace and home of Bike Friday bicycle manufacturer Green Gear Cycling.", "title": "Economy" }, { "paragraph_id": 41, "text": "Organically Grown Company, the largest distributor of organic fruits and vegetables in the northwest, started in Eugene in 1978 as a non-profit co-op for organic farmers. Notable local food processors, many of whom manufacture certified organic products, include Golden Temple (Yogi Tea), Merry Hempsters, Springfield Creamery (Nancy's Yogurt), and Mountain Rose Herbs.", "title": "Economy" }, { "paragraph_id": 42, "text": "Until July 2008, Hynix Semiconductor America had operated a large semiconductor plant in west Eugene. In late September 2009, Uni-Chem of South Korea announced its intention to purchase the Hynix site for solar cell manufacturing. However, this deal fell through and as of late 2012, is no longer planned. In 2015, semiconductor manufacturer Broadcom purchased the plant with plans to upgrade and reopen it. 
The company abandoned these plans and put it up for sale in November 2016.", "title": "Economy" }, { "paragraph_id": 43, "text": "Luckey's Club Cigar Store is one of the oldest bars in Oregon. Tad Luckey Sr. purchased it in 1911, making it one of the oldest businesses in Eugene. The \"Club Cigar\", as it was called in the late 19th century, was for many years a men-only salon. It survived both the Great Depression and Prohibition, partly because Eugene was a \"dry town\" before the end of Prohibition.", "title": "Economy" }, { "paragraph_id": 44, "text": "The city has over 25 breweries, offers a variety of dining options with a local focus; the city is surrounded by wineries. The most notable fungi here is the truffle; Eugene hosts the annual Oregon Truffle Festival in January.", "title": "Economy" }, { "paragraph_id": 45, "text": "In 2012, the Eugene metro region was dubbed the Silicon Shire for its growing tech industry.", "title": "Economy" }, { "paragraph_id": 46, "text": "According to Eugene's 2017 Comprehensive Annual Financial Report, the city's top employers are:", "title": "Economy" }, { "paragraph_id": 47, "text": "Eugene has a growing problem with homelessness. The problem has been referenced in popular culture, including in the episode The 30% Iron Chef in Futurama. During the COVID-19 pandemic, the city experienced a controversy over its continuing policy of homeless removal, despite CDC guidelines to not engage in homeless removal.", "title": "Economy" }, { "paragraph_id": 48, "text": "Eugene has a significant population of people in pursuit of alternative ideas and a large original hippie population. Beginning in the 1960s, the countercultural ideas and viewpoints espoused by area native Ken Kesey became established as the seminal elements of the vibrant social tapestry that continue to define Eugene. The Merry Prankster, as Kesey was known, has arguably left the most indelible imprint of any cultural icon in his hometown. He is best known as the author of One Flew Over the Cuckoo's Nest and as the male protagonist in Tom Wolfe's The Electric Kool-Aid Acid Test.", "title": "Arts and culture" }, { "paragraph_id": 49, "text": "In 2005, the city council unanimously approved a new slogan for the city: \"World's Greatest City for the Arts & Outdoors\". While Eugene has a vibrant arts community for a city its size, and is well situated near many outdoor opportunities, this slogan was frequently criticized by locals as embarrassing and ludicrous. In early 2010, the slogan was changed to \"A Great City for the Arts & Outdoors.\"", "title": "Arts and culture" }, { "paragraph_id": 50, "text": "Eugene's Saturday Market, open every Saturday from April through November, was founded in 1970 as the first \"Saturday Market\" in the United States. It is adjacent to the Lane County Farmer's Market in downtown Eugene. All vendors must create or grow all their own products. The market reappears as the \"Holiday Market\" between Thanksgiving and New Year's in the Lane County Events Center at the fairgrounds.", "title": "Arts and culture" }, { "paragraph_id": 51, "text": "Eugene is noted for its \"community inventiveness.\" Many U.S. trends in community development originated in Eugene. The University of Oregon's participatory planning process, known as The Oregon Experiment, was the result of student protests in the early 1970s. The book of the same name is a major document in modern enlightenment thinking in planning and architectural circles. 
The process, still used by the university in modified form, was created by Christopher Alexander, whose works also directly inspired the creation of the Wiki. Some research for the book A Pattern Language, which inspired the Design Patterns movement and Extreme Programming, was done by Alexander in Eugene. Not coincidentally, those engineering movements also had origins here. Decades after its publication, A Pattern Language is still one of the best-selling books on urban design.", "title": "Arts and culture" }, { "paragraph_id": 52, "text": "In the 1970s, Eugene was packed with cooperative and community projects. It still has small natural food stores in many neighborhoods, some of the oldest student cooperatives in the country, and alternative schools have been part of the school district since 1971. The old Grower's Market, downtown near the Amtrak depot, is the only food cooperative in the U.S. with no employees. It is possible to see Eugene's trend-setting non-profit tendencies in much newer projects, such as Square One Villages and the Center for Appropriate Transport. In 2006, an initiative began to create a tenant-run development process for downtown Eugene.", "title": "Arts and culture" }, { "paragraph_id": 53, "text": "In the fall of 2003, neighbors noticed \"an unassuming two-acre remnant orchard tucked into the Friendly Area Neighborhood\" had been put up for sale by its owner, a resident of New York City. Learning a prospective buyer had plans to build several houses on the property, they formed a nonprofit organization called Madison Meadow in June 2004 in order to buy the property and \"preserve it as undeveloped space in perpetuity.\" In 2007 their effort was named Third Best Community Effort by the Eugene Weekly, and by the end of 2008 they had raised enough money to purchase the property.", "title": "Arts and culture" }, { "paragraph_id": 54, "text": "The City of Eugene has an active Neighborhood Program. Several neighborhoods are known for their green activism. Friendly Neighborhood has a highly popular neighborhood garden established on the right of way of a street never built. There are a number of community gardens on public property. Amazon Neighborhood has a former church turned into a community center. Whiteaker hosts a housing co-op that dates from the early 1970s that has re-purposed both their parking lots into food production and play space. An unusual eco-village with natural building techniques and large shared garden can be found in Jefferson Westside neighborhood. A several block area in the River Road Neighborhood is known as a permaculture hotspot with an increasing number of suburban homes trading grass for garden, installing rain water catchment systems, food producing landscapes and solar retrofits. Several sites have planted gardens by removing driveways. Citizen volunteers are working with the City of Eugene to restore a 65-tree filbert grove on public property. 
There are deepening social and economic networks in the neighborhood.", "title": "Arts and culture" }, { "paragraph_id": 55, "text": "Eugene museums include the University of Oregon's Jordan Schnitzer Museum of Art and Museum of Natural and Cultural History, the Oregon Air and Space Museum, Lane County History Museum, Maude Kerns Art Center, Shelton McMurphey Johnson House, and the Eugene Science Center.", "title": "Arts and culture" }, { "paragraph_id": 56, "text": "Eugene is home to numerous cultural organizations, including the Eugene Symphony (whose previous music directors include Marin Alsop, Giancarlo Guerrero, and Miguel Harth-Bedoya); the Eugene Ballet, a professional full-time touring company; the Eugene Opera, the Eugene Concert Choir, the Bushnell University Community Choir, the Oregon Mozart Players, the Oregon Bach Festival, the Oregon Children's Choir, the Eugene-Springfield Youth Orchestras, Ballet Fantastique and Oregon Festival of American Music. Principal performing arts venues include the Hult Center for the Performing Arts, The John G. Shedd Institute for the Arts (\"The Shedd\"), the McDonald Theatre, and W.O.W. Hall.", "title": "Arts and culture" }, { "paragraph_id": 57, "text": "The University of Oregon School of Music and Dance also attracts world class performers and teaching artists throughout the year, many of whom perform at Beall Concert Hall. The university campus also frequently hosts performances at Matthew Knight Arena and the Erb Memorial Union ballroom.", "title": "Arts and culture" }, { "paragraph_id": 58, "text": "A number of live theater groups are based in Eugene, including Free Shakespeare in the Park, Oregon Contemporary Theatre, The Very Little Theatre, Actors Cabaret, LCC Theatre, Rose Children's Theatre, and University Theatre. Each has its own performance venue.", "title": "Arts and culture" }, { "paragraph_id": 59, "text": "Because of its status as a college town, Eugene has been home to many music genres, musicians and bands, ranging from electronic dance music such as dubstep and drum and bass to garage rock, hip hop, folk and heavy metal. Eugene also has growing reggae and street-performing bluegrass and jug band scenes. Multi-genre act the Cherry Poppin' Daddies became a prominent figure in Eugene's music scene and became the house band at Eugene's W.O.W. Hall. In the late 1990s, their contributions to the swing revival movement propelled them to national stardom. Rock band Floater originated in Eugene as did the Robert Cray blues band. Doom metal band YOB is among the leaders of the Eugene heavy music scene.", "title": "Arts and culture" }, { "paragraph_id": 60, "text": "Eugene is home to \"Classical Gas\" Composer and two-time Grammy award winner Mason Williams who spent his years as a youth living between his parents in Oakridge, Oregon and Oklahoma. Mason Williams puts on a yearly Christmas show at the Hult center for performing arts with a full orchestra produced by author, audio engineer and University of Oregon professor Don Latarski.", "title": "Arts and culture" }, { "paragraph_id": 61, "text": "Dick Hyman, noted jazz pianist and musical director for many of Woody Allen's films, designs and hosts the annual Now Hear This! jazz festival at the Oregon Festival of American Music (OFAM). OFAM and the Hult Center routinely draw major jazz talent for concerts.", "title": "Arts and culture" }, { "paragraph_id": 62, "text": "Eugene is also home to a large Zimbabwean music community. 
Kutsinhira Cultural Arts Center, which is \"dedicated to the music and people of Zimbabwe,\" is based in Eugene.", "title": "Arts and culture" }, { "paragraph_id": 63, "text": "Eugene's visual arts community is supported by over 20 private art galleries and several organizations, including Maude Kerns Art Center, Lane Arts Council, DIVA (the Downtown Initiative for the Visual Arts) and the Eugene Glass School.", "title": "Arts and culture" }, { "paragraph_id": 64, "text": "In 2015 installations from a group of Eugene-based artists known as Light At Play were showcased in several events around the world as part of the International Year of Light, including displays at the Smithsonian and the National Academy of Sciences.", "title": "Arts and culture" }, { "paragraph_id": 65, "text": "The Eugene area has been used as a filming location for several Hollywood films, most famously for 1978's National Lampoon's Animal House, which was also filmed in nearby Cottage Grove. John Belushi had the idea for the film The Blues Brothers during filming of Animal House when he happened to meet Curtis Salgado at what was then the Eugene Hotel.", "title": "Arts and culture" }, { "paragraph_id": 66, "text": "Getting Straight, starring Elliott Gould and Candice Bergen, was filmed at Lane Community College in 1969. As the campus was still under construction at the time, the \"occupation scenes\" were easier to shoot.", "title": "Arts and culture" }, { "paragraph_id": 67, "text": "The \"Chicken Salad on Toast\" scene in the 1970 Jack Nicholson movie Five Easy Pieces was filmed at the Denny's restaurant at the southern I-5 freeway interchange near Glenwood. Nicholson directed the 1971 film Drive, He Said in Eugene.", "title": "Arts and culture" }, { "paragraph_id": 68, "text": "How to Beat the High Co$t of Living, starring Jane Curtin, Jessica Lange and Susan St. James, was filmed in Eugene in the fall of 1979. Locations visible in the film include Valley River Center (which is a driving force in the plot), Skinner Butte and Ya-Po-Ah Terrace, the Willamette River and River Road Hardware.", "title": "Arts and culture" }, { "paragraph_id": 69, "text": "Several track and field movies have used Eugene as a setting and/or a filming location. Personal Best, starring Mariel Hemingway, was filmed in Eugene in 1982. The film centered on a group of women who are trying to qualify for the Olympic track and field team. Two track and field movies about the life of Steve Prefontaine, Prefontaine and Without Limits, were released within a year of each other in 1997–1998. Kenny Moore, Eugene-trained Olympic runner and co-star in Prefontaine, co-wrote the screenplay for Without Limits. Prefontaine was filmed in Washington because the Without Limits production bought out Hayward Field for the summer to prevent its competition from shooting there. Kenny Moore also wrote a biography of Bill Bowerman, played in Without Limits by Donald Sutherland back in Eugene 20 years after he had appeared in Animal House. Moore had also had a role in Personal Best.", "title": "Arts and culture" }, { "paragraph_id": 70, "text": "Stealing Time, a 2003 independent film, was partially filmed in Eugene. When the film premiered in June 2001 at the Seattle International Film Festival, it was titled Rennie's Landing after a popular bar near the University of Oregon campus. The title was changed for its DVD release. 
Zerophilia was filmed in Eugene in 2006.", "title": "Arts and culture" }, { "paragraph_id": 71, "text": "The 2016 Tracktown was about a distance runner training for the Olympics in Eugene.", "title": "Arts and culture" }, { "paragraph_id": 72, "text": "Religious institutions of higher learning in Eugene include Bushnell University and New Hope Christian College. Bushnell University (formerly Northwest Christian University), founded in 1895, has ties with the Christian Church (Disciples of Christ). New Hope Christian College (formerly Eugene Bible College) originated with the Bible Standard Conference in 1915, which joined with Open Bible Evangelistic Association to create Open Bible Standard Churches in 1932. Eugene Bible College was started from this movement by Fred Hornshuh in 1925.", "title": "Arts and culture" }, { "paragraph_id": 73, "text": "There are two Eastern Orthodox Church parishes in Eugene: St John the Wonderworker Orthodox Christian Church in the Historic Whiteaker Neighborhood and Saint George Greek Orthodox Church.", "title": "Arts and culture" }, { "paragraph_id": 74, "text": "There are six Roman Catholic parishes in Eugene as well: St. Mary Catholic Church, St. Jude Catholic Church, St. Mark Catholic Church, St. Peter Catholic Church, St. Paul Catholic Church, and St. Thomas More Catholic Church.", "title": "Arts and culture" }, { "paragraph_id": 75, "text": "Eugene also has a Ukrainian Catholic Church named Nativity of the Mother of God.", "title": "Arts and culture" }, { "paragraph_id": 76, "text": "There is a mainline Protestant contingency in the city as well—such as the largest of the Lutheran Churches, Central Lutheran near the U of O Campus and the Episcopal Church of the Resurrection.", "title": "Arts and culture" }, { "paragraph_id": 77, "text": "The Eugene area has a sizeable LDS Church presence, with three stakes, consisting of 23 congregations (wards and branches). The Church of Jesus Christ announced plans in April 2020 to build a temple in Eugene.", "title": "Arts and culture" }, { "paragraph_id": 78, "text": "The greater Eugene-Springfield area also has a Jehovah's Witnesses presence with five Kingdom Halls, several having multiple congregations in one Kingdom Hall.", "title": "Arts and culture" }, { "paragraph_id": 79, "text": "The Reconstructionist Temple Beth Israel is Eugene's largest Jewish congregation. It was also, for many decades, Eugene's only synagogue, until Orthodox members broke away in 1992 and formed \"Congregation Ahavas Torah\".", "title": "Arts and culture" }, { "paragraph_id": 80, "text": "Eugene has a community of some 140 Sikhs, who have established a Sikh temple.", "title": "Arts and culture" }, { "paragraph_id": 81, "text": "The 340-member congregation of the Unitarian Universalist Church in Eugene (UUCE) purchased the former Eugene Scottish Rite Temple in May 2010, renovated it, and began services there in September 2012.", "title": "Arts and culture" }, { "paragraph_id": 82, "text": "Saraha Nyingma Buddhist Temple in Eugene opened in 2012 in the former site of the Unitarian Universalist Church.", "title": "Arts and culture" }, { "paragraph_id": 83, "text": "The First Congregational Church, UCC is a large progressive Christian Church with a long history of justice focused ministries and a very active membership. Three years ago, the congregation coordinated with the Connections Program of the St Vincent DePaul organization to provide transitional homes for two unhoused families on the church's property. 
Through life - skills support and training and a more stable housing situation these families are then able to make their way into independent living.", "title": "Arts and culture" }, { "paragraph_id": 84, "text": "Eugene markets itself as \"Track Town USA\". There are close links between the University of Oregon's successful track & field program, the Oregon Track Club, and Nike, Inc, who were founded by University of Oregon track athlete Phil Knight and his coach, Bill Bowerman.", "title": "Sports" }, { "paragraph_id": 85, "text": "Eugene's miles of running trails, through its unusually large park system, are among the most extensive in the U.S. Notable trails include Pre's Trail in Alton Baker Park, Rexius Trail, the Adidas Oregon Trail, and the Ridgeline Trail. There is also an extensive network of trails along the Willamette River that reaches into neighboring Springfield, as well as along Amazon Creek in the southern and western parts of town.", "title": "Sports" }, { "paragraph_id": 86, "text": "Jogging was introduced to the U.S. through Eugene, brought from New Zealand by Bill Bowerman, who wrote the best-selling book \"Jogging\", and coached the champion University of Oregon track and cross country teams. During Bowerman's tenure, his \"Men of Oregon\" won 24 individual NCAA titles, including titles in 15 out of the 19 events contested. During Bowerman's 24 years at Oregon, his track teams finished in the top ten at the NCAA championships 16 times, including four team titles (1962, '64, '65, '70), and two second-place trophies. His teams also posted a dual meet record of 114–20.", "title": "Sports" }, { "paragraph_id": 87, "text": "Bowerman also invented the waffle sole for running shoes in Eugene, and with Oregon alumnus Phil Knight founded shoe giant Nike. The city has dozens of running clubs. The climate is cool and temperate, good both for jogging and record-setting. Eugene is home to the University of Oregon's Hayward Field track, which hosts numerous collegiate and amateur track and field meets throughout the year, most notably the Prefontaine Classic. Hayward Field was host to the 2004 AAU Junior Olympic Games, the 1989 World Masters Athletics Championships, the track and field events of the 1998 World Masters Games, the 2006 Pacific-10 track and field championships, the 1971, 1975, 1986, 1993, 1999, 2001, 2009, and 2011 USA Track & Field Outdoor Championships and the 1972, 1976, 1980, 2008, 2012, and 2016 U.S. Olympic trials. Eugene is the host of the delayed 2021 World Athletics Championships. The city bid for the 2019 event but lost narrowly to Doha, Qatar.", "title": "Sports" }, { "paragraph_id": 88, "text": "Eugene's Oregon Ducks are part of the Pac-12 Conference (Pac-12). American football is especially popular, with intense rivalries between the Ducks and both the Oregon State University Beavers and the University of Washington Huskies. Autzen Stadium is home to Duck football, with a seating capacity of 54,000 but has had over 60,000 with standing room only. The basketball arena, McArthur Court, was built in 1926. The arena was replaced by the Matthew Knight Arena in late 2010.", "title": "Sports" }, { "paragraph_id": 89, "text": "The Nationwide Tour's golfing event Oregon Classic takes place at Shadow Hills Country Club, just north of Eugene. The event has been played every year since 1998, except in 2001 when it was slated to begin the day after the 9/11 terrorist attacks. 
The top 20 players from the Nationwide Tour are promoted to the PGA Tour for the following year.", "title": "Sports" }, { "paragraph_id": 90, "text": "Eugene is also home to the Eugene Emeralds, a short-season Class A minor-league baseball team. The \"Ems\" play their home games in PK Park, also the home of the University of Oregon baseball team. The Eugene Jr. Generals, a Tier III Junior \"A\" ice hockey team belonging to the Northern Pacific Hockey League (NPHL) consisting of 8 teams throughout Oregon and Washington, plays at the Lane County Ice Center. Lane United FC, a soccer club that participates in the Northwest Division of USL League Two, was founded in 2013 and plays its home games at Civic Park.", "title": "Sports" }, { "paragraph_id": 91, "text": "The following table lists some sports clubs in Eugene and their usual home venue:", "title": "Sports" }, { "paragraph_id": 92, "text": "Spencer Butte Park at the southern edge of town provides access to Spencer Butte, a dominant feature of Eugene's skyline. Hendricks Park, situated on a knoll to the east of downtown, is known for its rhododendron garden and nearby memorial to Steve Prefontaine, known as Pre's Rock, where the legendary University of Oregon runner was killed in an auto accident. Alton Baker Park, next to the Willamette River, contains Pre's Trail. Also next to the Willamette are Skinner Butte Park and the Owen Memorial Rose Garden, which contains more than 4,500 roses of over 400 varieties, as well as the 150-year-old Black Tartarian Cherry tree, an Oregon Heritage Tree.", "title": "Parks and recreation" }, { "paragraph_id": 93, "text": "The city of Eugene maintains an urban forest. The University of Oregon campus is an arboretum, with over 500 species of trees. The city operates and maintains scenic hiking trails that pass through and across the ridges of a cluster of hills in the southern portion of the city, on the fringe of residential neighborhoods. Some trails allow biking, and others are for hikers and runners only.", "title": "Parks and recreation" }, { "paragraph_id": 94, "text": "The nearest ski resort, Willamette Pass, is one hour from Eugene by car. On the way, along Oregon Route 58, are several reservoirs and lakes, the Oakridge mountain bike trails, hot springs, and waterfalls within Willamette National Forest. Eugene residents also frequent the Hoodoo and Mount Bachelor ski resorts. The Three Sisters Wilderness, the Oregon Dunes National Recreation Area, and Smith Rock are just a short drive away.", "title": "Parks and recreation" }, { "paragraph_id": 95, "text": "In 1944, Eugene adopted a council–manager form of government, replacing the day-to-day management of city affairs by the part-time mayor and volunteer city council with a full-time professional city manager. The subsequent history of Eugene city government has largely been one of the dynamics—often contentious—between the city manager, the mayor and city council.", "title": "Government" }, { "paragraph_id": 96, "text": "According to statute, all Eugene and Lane County elections are officially non-partisan, with a primary containing all candidates in May. If a candidate gets more than 50% of the vote in the primary, they win the election outright, otherwise the top two candidates face off in a November runoff. 
This allows candidates to win seats during the lower-turnout primary election.", "title": "Government" }, { "paragraph_id": 97, "text": "The mayor of Eugene is Lucy Vinis, who has been in office since winning the popular vote in May 2016, and who was re-elected in May 2020. Recent mayors include Edwin Cone (1958–69), Les Anderson (1969–77) Gus Keller (1977–84), Brian Obie (1985–88), Jeff Miller (1989–92), Ruth Bascom (1993–96), Jim Torrey (1997–2004) and Kitty Piercy (2005-2017).", "title": "Government" }, { "paragraph_id": 98, "text": "Mayor: Lucy Vinis", "title": "Government" }, { "paragraph_id": 99, "text": "The Eugene Police Department is the city's law enforcement and public safety agency. The Lane County Sheriff's Office also has its headquarters in Eugene.", "title": "Government" }, { "paragraph_id": 100, "text": "The University of Oregon is served by the University of Oregon Police Department, and Eugene Police Department also has a police station in the West University District near campus. Lane Community College is served by the Lane Community College Public Safety Department. The Oregon State Police have a presence in the rural areas and highways around the Eugene metro area. The LTD downtown station, and the EmX lines are patrolled by LTD Transit Officers. Since 1989 the mental health crisis intervention non-governmental agency CAHOOTS has responded to Eugene's mental health 911 calls.", "title": "Government" }, { "paragraph_id": 101, "text": "Eugene-Springfield Fire Department is the agency responsible for emergency medical services, fire suppression, HAZMAT operations and water/Confined spaces rescues in the combined Eugene-Springfield metropolitan area.", "title": "Government" }, { "paragraph_id": 102, "text": "Eugene used to have an ordinance which prohibited car horn usage for non-driving purposes. After several residents were cited for this offense during the anti-Gulf War demonstrations in January 1991, the city was taken to court and in 1992 the Oregon Court of Appeals overturned the ordinance, finding it unconstitutionally vague. Eugene City Hall was abandoned in 2012 for reasons of structural integrity, energy efficiency, and obsolete size. Various offices of city government became tenants in eight other buildings.", "title": "Government" }, { "paragraph_id": 103, "text": "Being the largest city by far in Lane County, Eugene's voters almost always decide the county's partisan tilt. While Eugene has historically been a counter-culture-heavy and left-leaning college town, the county's partisan leanings have intensified in recent decades, mirroring the general polarization of Oregon voters along urban (pro-Democratic) and rural (pro-Republican) lines.", "title": "Government" }, { "paragraph_id": 104, "text": "Lane County voted for Bernie Sanders over eventual 2016 nominee Hillary Clinton by 60.6-38.1%, and Eugene offered Sanders an even larger share of its vote.", "title": "Government" }, { "paragraph_id": 105, "text": "Eugene is home to the University of Oregon. Other institutions of higher learning include Bushnell University, Lane Community College, New Hope Christian College, Gutenberg College, and Pacific University's Eugene campus.", "title": "Education" }, { "paragraph_id": 106, "text": "The Eugene School District includes four full-service high schools (Churchill, North Eugene, Sheldon, and South Eugene) and many alternative education programs, such as international schools and charter schools. 
Foreign language immersion programs in the district are available in Spanish, French, Chinese, and Japanese.", "title": "Education" }, { "paragraph_id": 107, "text": "The Bethel School District serves children in the Bethel neighborhood on the northwest edge of Eugene. The district is home to the traditional Willamette High School and the alternative Kalapuya High School. There are 11 schools in this district.", "title": "Education" }, { "paragraph_id": 108, "text": "Eugene also has several private schools, including the Eugene Waldorf School, the Outdoor High School, Eugene Montessori, Far Horizon Montessori, Eugene Sudbury School, Wellsprings Friends School, Oak Hill School, and The Little French School.", "title": "Education" }, { "paragraph_id": 109, "text": "Parochial schools in Eugene include Marist Catholic High School, O'Hara Catholic Elementary School, Eugene Christian School, and St. Paul Parish School.", "title": "Education" }, { "paragraph_id": 110, "text": "The largest library in Oregon is the University of Oregon's Knight Library, with collections totaling more than 3 million volumes and over 100,000 audio and video items. The Eugene Public Library moved into a new, larger building downtown in 2002. The four-story library is an increase from 38,000 to 130,000 square feet (3,500 to 12,100 m). There are also two branches of the Eugene Public Library, the Sheldon Branch Library in the neighborhood of Cal Young/Sheldon, and the Bethel Branch Library, in the neighborhood of Bethel. Eugene also has the Lane County Law Library.", "title": "Education" }, { "paragraph_id": 111, "text": "The largest newspaper serving the area is The Register-Guard, a daily newspaper with a circulation of about 70,000, published independently by the Baker family of Eugene until 2018, before being acquired by GateHouse Media, (now owned by Gannett Company). Other newspapers serving the area include the Eugene Weekly, the Emerald, the student-run independent newspaper at the University of Oregon, now published on Mondays and Thursdays;The Torch, the student-run newspaper at Lane Community College, the Ignite, the newspaper at New Hope Christian College and The Beacon Bolt, the student-run newspaper at Bushnell University. Eugene Magazine, Lifestyle Quarterly, Eugene Living, and Sustainable Home and Garden magazines also serve the area. Adelante Latino is a Spanish language newspaper in Eugene that serves all of Lane County.", "title": "Media" }, { "paragraph_id": 112, "text": "Local television stations include KMTR (NBC), KVAL (CBS), KLSR-TV (Fox), KEVU-CD, KEZI (ABC), KEPB (PBS), and KTVC (independent).", "title": "Media" }, { "paragraph_id": 113, "text": "The local NPR affiliates are KOPB, and KLCC. Radio station KRVM-AM is an affiliate of Jefferson Public Radio, based at Southern Oregon University. The Pacifica Radio affiliate is the University of Oregon student-run radio station, KWVA. Additionally, the community supports two other radio stations: KWAX (classical) and KRVM-FM (alternative).", "title": "Media" }, { "paragraph_id": 114, "text": "AM stations", "title": "Media" }, { "paragraph_id": 115, "text": "FM stations", "title": "Media" }, { "paragraph_id": 116, "text": "Lane Transit District (LTD), a public transportation agency formed in 1970, covers 240 square miles (620 km) of Lane County, including Creswell, Cottage Grove, Junction City, Veneta, and Blue River. Operating more than 90 buses during peak hours, LTD carries riders on 3.7 million trips every year. 
LTD also operates a bus rapid transit line that runs between Eugene and Springfield—Emerald Express (EmX)—much of which runs in its own lane, with stations providing for off-board fare payment. LTD's main terminus in Eugene is at the Eugene Station. LTD also offers paratransit.", "title": "Infrastructure" }, { "paragraph_id": 117, "text": "Greyhound Lines provides service between Los Angeles and Portland on the I-5 corridor.", "title": "Infrastructure" }, { "paragraph_id": 118, "text": "Cycling is popular in Eugene and many people commute via bicycle. Summertime events and festivals frequently have valet bicycle parking corrals that are often filled to capacity by three hundred or more bikes. Many people commute to work by bicycle every month of the year. PeaceHealth Rides, a bike share system formerly operated by Uber subsidiary JUMP, and currently operated by non-profit Cascadia Mobility, offers 300 city-owned bicycles available to the public for a small fee. Bike trails take commuting and recreational bikers along the Willamette River past a scenic rose garden, along Amazon Creek, through the downtown, and through the University of Oregon campus. Eugene is close to many popular mountain bike trails, and Disciples of Dirt is the local mountain bike club that organizes group rides and promotes trail stewardship.", "title": "Infrastructure" }, { "paragraph_id": 119, "text": "In 2009, the League of American Bicyclists cited Eugene as 1 of 10 \"Gold-level\" cities in the U.S. because of its \"remarkable commitments to bicycling.\" In 2010, Bicycling magazine named Eugene the 5th most bike-friendly city in America. The U.S. Census Bureau's annual American Community Survey reported that Eugene had a bicycle commuting mode share of 7.3% in 2011, the fifth highest percentage nationwide among U.S. cities with 65,000 people or more, and 13 times higher than the national average of 0.56%.", "title": "Infrastructure" }, { "paragraph_id": 120, "text": "The 1908 Amtrak depot downtown was restored in 2004; it is the southern terminus for two daily runs of the Amtrak Cascades, and a stop along the route in each direction for the daily Coast Starlight.", "title": "Infrastructure" }, { "paragraph_id": 121, "text": "Air travel is served by the Eugene Airport, also known as Mahlon Sweet Field, which is the fifth largest airport in the Northwest and second largest airport in Oregon. The Eugene Metro area also has numerous private airports. The Eugene Metro area also has several heliports, such as the Sacred Heart Medical Center Heliport and Mahlon Sweet Field Heliport, and many single helipads.", "title": "Infrastructure" }, { "paragraph_id": 122, "text": "Highways traveling within and through Eugene include:", "title": "Infrastructure" }, { "paragraph_id": 123, "text": "Eugene is the home of Oregon's largest publicly owned water and power utility, the Eugene Water & Electric Board (EWEB). EWEB got its start in the first decade of the 20th century, after an epidemic of typhoid found in the groundwater supply. The City of Eugene condemned Eugene's private water utility and began treating river water (first the Willamette; later the McKenzie) for domestic use. EWEB got into the electric business when power was needed for the water pumps. 
Excess electricity generated by the EWEB's hydropower plants was used for street lighting.", "title": "Infrastructure" }, { "paragraph_id": 124, "text": "Natural gas service is provided by NW Natural.", "title": "Infrastructure" }, { "paragraph_id": 125, "text": "Wastewater treatment services are provided by the Metropolitan Wastewater Management Commission, a partnership between the Cities of Eugene and Springfield and Lane County.", "title": "Infrastructure" }, { "paragraph_id": 126, "text": "Three hospitals serve the Eugene-Springfield area. Sacred Heart Medical Center University District is the only one within Eugene city limits. McKenzie-Willamette Medical Center and Sacred Heart Medical Center at RiverBend are in Springfield. Oregon Medical Group, a primary care based multi-specialty group, operates several clinics in Eugene, as does PeaceHealth Medical Group. White Bird Clinic provides a broad range of health and human services, including low-cost clinics. The Volunteers in Medicine & Occupy Medical clinics provide free medical and mental care to low-income adults without health insurance.", "title": "Infrastructure" }, { "paragraph_id": 127, "text": "Eugene is one of the few municipalities in the US that does not fluoridate its water supply.", "title": "Infrastructure" }, { "paragraph_id": 128, "text": "Eugene has four sister cities:", "title": "Sister cities" } ]
Eugene is a city in and the county seat of Lane County, Oregon, United States. It is located at the southern end of the Willamette Valley, near the confluence of the McKenzie and Willamette rivers, about 50 miles (80 km) east of the Oregon Coast. The second-most populous city in Oregon, Eugene had a population of 176,654 as of the 2020 United States census, and it covers a city area of 44.21 sq mi (114.5 km2). The Eugene-Springfield metropolitan statistical area is the second largest in Oregon behind Portland. In 2022, Eugene's population was estimated to have reached 179,887. Eugene is home to the University of Oregon, Bushnell University, and Lane Community College. The city is noted for its natural environment, recreational opportunities, and focus on the arts, along with its history of civil unrest, protests, and green activism. Eugene's official slogan is "A Great City for the Arts and Outdoors". It is also referred to as the "Emerald City" and as "Track Town, USA". The Nike corporation had its beginnings in Eugene. In July 2022, the city hosted the 18th World Athletics Championships.
2001-07-29T03:57:38Z
2023-12-25T03:30:05Z
[ "Template:Div col", "Template:Div col end", "Template:Cite journal", "Template:Oregon county seats", "Template:Use American English", "Template:Flagicon", "Template:Reflist", "Template:Cite web", "Template:Dead link", "Template:Infobox settlement", "Template:Cite EB1911", "Template:Lane County, Oregon", "Template:Oregon", "Template:Authority control", "Template:Cite book", "Template:Use mdy dates", "Template:Respell", "Template:Weather box", "Template:US Census population", "Template:Citation needed", "Template:Convert", "Template:Main", "Template:Clear", "Template:Commons category", "Template:IPAc-en", "Template:Cite news", "Template:Wikivoyage", "Template:Notelist", "Template:Cite magazine", "Template:Oregon cities and mayors of 100,000 population", "Template:Portal-inline", "Template:Webarchive", "Template:Cite episode" ]
https://en.wikipedia.org/wiki/Eugene,_Oregon
9,627
Elizabeth Barrett Browning
Elizabeth Barrett Browning (née Moulton-Barrett; 6 March 1806 – 29 June 1861) was an English poet of the Victorian era, popular in Britain and the United States during her lifetime and frequently anthologised after her death; her work received renewed attention following the feminist scholarship of the 1970s and 1980s, and greater recognition of women writers in English. Born in County Durham, the eldest of 12 children, Elizabeth Barrett wrote poetry from the age of eleven. Her mother's collection of her poems forms one of the largest extant collections of juvenilia by any English writer. At 15, she became ill, suffering intense head and spinal pain for the rest of her life. Later in life, she also developed lung problems, possibly tuberculosis. She took laudanum for the pain from an early age, which is likely to have contributed to her frail health. In the 1840s, Elizabeth was introduced to literary society through her distant cousin and patron John Kenyon. Her first adult collection of poems was published in 1838, and she wrote prolifically from 1841 to 1844, producing poetry, translation, and prose. She campaigned for the abolition of slavery, and her work helped influence reform in child labour legislation. Her prolific output made her a rival to Tennyson as a candidate for poet laureate on the death of Wordsworth. Elizabeth's volume Poems (1844) brought her great success, attracting the admiration of the writer Robert Browning. Their correspondence, courtship, and marriage were carried out in secret, for fear of her father's disapproval. Following the wedding, she was indeed disinherited by her father. In 1846, the couple moved to Italy, where she lived for the rest of her life. Elizabeth died in Florence in 1861. A collection of her later poems were published by her husband shortly after her death. They had a son, known as "Pen" (Robert Barrett, 1849–1912). Pen devoted himself to painting until his eyesight began to fail later in life. He also built a large collection of manuscripts and memorabilia of his parents, but because he died intestate, it was sold by public auction to various bidders and then scattered upon his death. The Armstrong Browning Library has recovered some of his collection, and it now houses the world's largest collection of Browning memorabilia. Elizabeth's work had a major influence on prominent writers of the day, including the American poets Edgar Allan Poe and Emily Dickinson. She is remembered for such poems as "How Do I Love Thee?" (Sonnet 43, 1845) and Aurora Leigh (1856). Some of Elizabeth Barrett's family had lived in Jamaica since 1655. Their wealth derived mainly from slave labour from their plantations in the Caribbean. Edward Barrett (1734–1798) was owner of 10,000 acres (40 km) in the estates of Cinnamon Hill, Cornwall, Cambridge, and Oxford in northern Jamaica. Elizabeth's maternal grandfather owned sugar plantations farmed by slaves they bought from Africa, mills, glassworks, and ships that traded between Jamaica and Newcastle in the United Kingdom. The family wished to hand down their name, stipulating that Barrett always should be held as a surname. In some cases, inheritance was given on condition that the name was used by the beneficiary; the English gentry and "squirearchy" had long encouraged this sort of name changing. 
Given this strong tradition, Elizabeth used "Elizabeth Barrett Moulton Barrett" on legal documents, and before she was married, she often signed herself "Elizabeth Barrett Barrett" or "EBB" (initials which she was able to keep after her wedding). Elizabeth's father chose to raise his family in England, and his business enterprises remained in Jamaica. Elizabeth's mother, Mary Graham Clarke, also owned plantations farmed by enslaved people in the British West Indies. Elizabeth Barrett Moulton-Barrett was born on (it is supposed) 6 March 1806 in Coxhoe Hall, between the villages of Coxhoe and Kelloe in County Durham, England. Her parents were Edward Barrett Moulton-Barrett and Mary Graham Clarke. However, it has been suggested that, when she was christened on 9 March, she was already three or four months old, and that this was concealed because her parents had married only on 14 May 1805. Although she had already been baptised by a family friend in that first week of her life, she was baptised again, more publicly, on 10 February 1808 at Kelloe parish church, at the same time as her younger brother, Edward (known as Bro). He had been born in June 1807, only 15 months after Elizabeth's stated date of birth. A private christening might seem unlikely for a family of standing, and while Bro's birth was celebrated with a holiday on the family's Caribbean plantations, Elizabeth's was not. Elizabeth was the eldest of 12 children (eight boys and four girls). Eleven lived to adulthood; one daughter died at the age of 3, when Elizabeth was 8. The children all had nicknames: Elizabeth was Ba. She rode her pony, went for family walks and picnics, socialised with other county families, and participated in home theatrical productions. Unlike her siblings, she immersed herself in books as often as she could get away from the social rituals of her family. In 1809, the family moved to Hope End, a 500-acre (200 ha) estate near the Malvern Hills in Ledbury, Herefordshire. Her father converted the Georgian house into stables and built a mansion of opulent Turkish design, which his wife described as something from the Arabian Nights' Entertainments. The interior's brass balustrades, mahogany doors inlaid with mother-of-pearl, and finely carved fireplaces were eventually complemented by lavish landscaping: ponds, grottos, kiosks, an ice house, a hothouse, and a subterranean passage from house to gardens. Her time at Hope End inspired her in later life to write Aurora Leigh (1856), her most ambitious work, which went through more than 20 editions by 1900, but none from 1905 to 1978. She was educated at home and tutored by Daniel McSwiney with her oldest brother. She began writing verses at the age of four. During the Hope End period, she was an intensely studious, precocious child. She claimed that she was reading novels at age 6, having been entranced by Pope's translations of Homer at age 8, studying Greek at age 10, and writing her own Homeric epic The Battle of Marathon: A Poem at age 11. In 1820, Mr Barrett privately published The Battle of Marathon, an epic-style poem, but all copies remained within the family. Her mother compiled the child's poetry into collections of "Poems by Elizabeth B. Barrett". Her father called her the "Poet Laureate of Hope End" and encouraged her work. The result is one of the larger collections of juvenilia of any English writer. 
Mary Russell Mitford described the young Elizabeth at this time as having "a slight, delicate figure, with a shower of dark curls falling on each side of a most expressive face; large, tender eyes, richly fringed by dark eyelashes, and a smile like a sunbeam." At about this time, Elizabeth began to battle an illness, which the medical science of the time was unable to diagnose. All three sisters came down with the syndrome, but it persisted only in Elizabeth. She had intense head and spinal pain with loss of mobility. Various biographies link this to a riding accident at the time (she fell while trying to dismount a horse), but there is no evidence to support the link. Sent to recover at the Gloucester spa, she was treated – in the absence of symptoms supporting another diagnosis – for a spinal problem. This illness continued for the rest of her life, and it is believed to be unrelated to the lung disease which she developed in 1837. She began to take opiates for the pain, laudanum (an opium concoction) followed by morphine, then commonly prescribed. She became dependent on them for much of her adulthood; the use from an early age may well have contributed to her frail health. Biographers such as Alethea Hayter have suggested that this dependency may have contributed to the wild vividness of her imagination and the poetry that it produced. By 1821, she had read Mary Wollstonecraft's A Vindication of the Rights of Woman (1792), and she became a passionate supporter of Wollstonecraft's political ideas. The child's intellectual fascination with the classics and metaphysics was reflected in a religious intensity which she later described as "not the deep persuasion of the mild Christian but the wild visions of an enthusiast." The Barretts attended services at the nearest Dissenting chapel, and Edward was active in Bible and missionary societies. Elizabeth's mother died in 1828, and she is buried at St Michael's Church, Ledbury, next to her daughter Mary. Sarah Graham-Clarke, Elizabeth's aunt, helped to care for the children, and she had clashes with Elizabeth's strong will. In 1831, Elizabeth's grandmother, Elizabeth Moulton, died. Following lawsuits and the abolition of slavery, Mr Barrett incurred great financial and investment losses that forced him to sell Hope End. Although the family was never poor, the place was seized and sold to satisfy creditors. 
She wrote to Mitford: "That was a very near escape from madness, absolute hopeless madness". The family returned to Wimpole Street in 1841. At Wimpole Street, Elizabeth spent most of her time in her upstairs room. Her health began to improve, but she saw few people other than her immediate family. One of those was John Kenyon, a wealthy friend and distant cousin of the family and patron of the arts. She received comfort from a spaniel named Flush, a gift from Mary Mitford. (Virginia Woolf later fictionalised the life of the dog, making him the protagonist of her 1933 novel Flush: A Biography). From 1841 to 1844, Elizabeth was prolific in poetry, translation, and prose. The poem The Cry of the Children, published in 1842 in Blackwood's, condemned child labour and helped bring about child-labour reforms by raising support for Lord Shaftesbury's Ten Hours Bill (1844). About the same time, she contributed critical prose pieces to Richard Henry Horne's A New Spirit of the Age, including a laudatory essay on Thomas Carlyle. In 1844, she published the two-volume Poems, which included "A Drama of Exile", "A Vision of Poets", and "Lady Geraldine's Courtship", and two substantial critical essays for 1842 issues of The Athenaeum. A self-proclaimed "adorer of Carlyle", she sent a copy to him as "a tribute of admiration & respect", which began a correspondence between them. "Since she was not burdened with any domestic duties expected of her sisters, Barrett Browning could now devote herself entirely to the life of the mind, cultivating an enormous correspondence, reading widely". Her prolific output made her a rival to Tennyson as a candidate for poet laureate in 1850 on the death of Wordsworth. A Royal Society of Arts blue plaque now commemorates Elizabeth at 50 Wimpole Street. Her 1844 volume Poems made her one of the more popular writers in the country and inspired Robert Browning to write to her. He wrote "I love your verses with all my heart, dear Miss Barrett," praising their "fresh strange music, the affluent language, the exquisite pathos and true new brave thought." Kenyon arranged for Browning to meet Elizabeth on 20 May 1845, in her rooms, and so began one of the most famous courtships in literature. Elizabeth had produced a large amount of work, but Browning had a great influence on her subsequent writing as did she on his: Two of Barrett's most famous pieces were written after she met Browning, Sonnets from the Portuguese and Aurora Leigh. Robert's Men and Women is also a product of that time. Some critics state that her activity was, in some ways, in decay before she met Browning: "Until her relationship with Robert Browning began in 1845, Barrett's willingness to engage in public discourse about social issues and about aesthetic issues in poetry, which had been so strong in her youth, gradually diminished, as did her physical health. As an intellectual presence and a physical being, she was becoming a shadow of herself." The courtship and marriage between Robert Browning and Elizabeth were made secretly as she knew her father would disapprove. After a private marriage at St Marylebone Parish Church, they honeymooned in Paris and then moved to Italy in September 1846, which became their home almost continuously until her death. Elizabeth's loyal lady's maid Elizabeth Wilson witnessed the marriage and accompanied the couple to Italy. Mr Barrett disinherited Elizabeth as he did each of his children who married. 
Elizabeth had foreseen her father's anger but had not anticipated her brothers' rejection. As Elizabeth had some money of her own, the couple were reasonably comfortable in Italy. The Brownings were well respected and even famous. Elizabeth grew stronger, and in 1849, at the age of 43, between four miscarriages, she gave birth to a son, Robert Wiedeman Barrett Browning, whom they called Pen. Their son later married, but had no legitimate children. At her husband's insistence, Elizabeth's second edition of Poems included her love sonnets; as a result, her popularity increased (as did critical regard), and her artistic position was confirmed. During the years of her marriage, her literary reputation far surpassed that of her poet-husband; when visitors came to their home in Florence, she was invariably the greater attraction. The couple came to know a wide circle of artists and writers, including William Makepeace Thackeray, sculptor Harriet Hosmer (who, she wrote, seemed to be the "perfectly emancipated female") and Harriet Beecher Stowe. In 1849, she met Margaret Fuller; Carlyle in 1851; French novelist George Sand in 1852, whom she had long admired. Among her intimate friends in Florence was the writer Isa Blagden, whom she encouraged to write novels. They met Alfred Tennyson in Paris, and John Forster, Samuel Rogers and the Carlyles in London, later befriending Charles Kingsley and John Ruskin. After the death of an old friend, G. B. Hunter, and then of her father, Barrett Browning's health started to deteriorate. The Brownings moved from Florence to Siena, residing at the Villa Alberti. Engrossed in Italian politics, she issued a small volume of political poems titled Poems before Congress (1860) "most of which were written to express her sympathy with the Italian cause after the outbreak of fighting in 1859". They caused a furore in Britain, and the conservative magazines Blackwood's and the Saturday Review labelled her a fanatic. She dedicated this book to her husband. Her last work was A Musical Instrument, published posthumously. Barrett Browning's sister Henrietta died in November 1860. The couple spent the winter of 1860–1861 in Rome where Barrett Browning's health deteriorated, and they returned to Florence in early June 1861. She became gradually weaker, using morphine to ease her pain. She died on 29 June 1861 in her husband's arms. Browning said that she died "smilingly, happily, and with a face like a girl's...Her last word was...'Beautiful' ". She was buried in the Protestant English Cemetery of Florence. "On Monday July 1 the shops in the area around Casa Guidi were closed, while Elizabeth was mourned with unusual demonstrations." The nature of her illness is still unclear. Some modern scientists speculate her illness may have been hypokalemic periodic paralysis, a genetic disorder that causes weakness and many of the other symptoms she described. Barrett Browning's first known poem "On the Cruelty of Forcement to Man" was written at the age of 6 or 8. The manuscript, which protests against impressment, is currently in the Berg Collection of the New York Public Library; the exact date is controversial because the "2" in the date 1812 is written over something else that is scratched out. Her first independent publication was "Stanzas Excited by Reflections on the Present State of Greece" in The New Monthly Magazine of May 1821; followed two months later by "Thoughts Awakened by Contemplating a Piece of the Palm which Grows on the Summit of the Acropolis at Athens". 
Her first collection of poems, An Essay on Mind, with Other Poems, was published in 1826 and reflected her passion for Byron and Greek politics. Its publication drew the attention of Hugh Stuart Boyd, a blind scholar of the Greek language, and of Uvedale Price, another Greek scholar, with whom she maintained sustained correspondence. Among other neighbours was Mrs James Martin from Colwall, with whom she corresponded throughout her life. Later, at Boyd's suggestion, she translated Aeschylus' Prometheus Bound (published in 1833; retranslated in 1850). During their friendship, Barrett studied Greek literature, including Homer, Pindar and Aristophanes. Elizabeth opposed slavery and published two poems highlighting the barbarity of the institution and her support for the abolitionist cause: "The Runaway Slave at Pilgrim's Point" and "A Curse for a Nation". The first depicts an enslaved woman whipped, raped, and made pregnant cursing her enslavers. Elizabeth declared herself glad that the slaves were "virtually free" when the Slavery Abolition Act passed in the British Parliament despite the fact that her father believed that abolition would ruin his business. The date of publication of these poems is in dispute, but her position on slavery in the poems is clear and may have led to a rift between Elizabeth and her father. She wrote to John Ruskin in 1855 "I belong to a family of West Indian slaveholders, and if I believed in curses, I should be afraid". Her father and uncle were unaffected by the Baptist War (1831–1832) and continued to own slaves until passage of the Slavery Abolition Act. In London, John Kenyon introduced Elizabeth to literary figures including William Wordsworth, Mary Russell Mitford, Samuel Taylor Coleridge, Alfred Tennyson and Thomas Carlyle. Elizabeth continued to write, contributing "The Romaunt of Margaret", "The Romaunt of the Page", "The Poet's Vow" and other pieces to various periodicals. She corresponded with other writers, including Mary Russell Mitford, who became a close friend and who supported Elizabeth's literary ambitions. In 1838 The Seraphim and Other Poems appeared, the first volume of Elizabeth's mature poetry to appear under her own name. Sonnets from the Portuguese was published in 1850. There is debate about the origin of the title. Some say it refers to the series of sonnets of the 16th-century Portuguese poet Luís de Camões. However, "my little Portuguese" was a pet name that Browning had adopted for Elizabeth and this may have some connection. The verse-novel Aurora Leigh, her most ambitious and perhaps the most popular of her longer poems, appeared in 1856. It is the story of a female writer making her way in life, balancing work and love, and based on Elizabeth's own experiences. Aurora Leigh was an important influence on Susan B. Anthony's thinking about the traditional roles of women, with regard to marriage versus independent individuality. The North American Review praised Elizabeth's poem: "Mrs. Browning's poems are, in all respects, the utterance of a woman — of a woman of great learning, rich experience, and powerful genius, uniting to her woman's nature the strength which is sometimes thought peculiar to a man." Much of Barrett Browning's work carries a religious theme. She had read and studied such works as Milton's Paradise Lost and Dante's Inferno. 
She says in her writing, "We want the sense of the saturation of Christ's blood upon the souls of our poets, that it may cry through them in answer to the ceaseless wail of the Sphinx of our humanity, expounding agony into renovation. Something of this has been perceived in art when its glory was at the fullest. Something of a yearning after this may be seen among the Greek Christian poets, something which would have been much with a stronger faculty". She believed that "Christ's religion is essentially poetry – poetry glorified". She explored the religious aspect in many of her poems, especially in her early work, such as the sonnets. She was interested in theological debate, had learned Hebrew and read the Hebrew Bible. Her seminal Aurora Leigh, for example, features religious imagery and allusion to the apocalypse. The critic Cynthia Scheinberg notes that female characters in Aurora Leigh and her earlier work "The Virgin Mary to the Child Jesus" allude to Miriam, sister and caregiver to Moses. These allusions to Miriam in both poems mirror the way in which Barrett Browning herself drew from Jewish history, while distancing herself from it, in order to maintain the cultural norms of a Christian woman poet of the Victorian Age. In the correspondence Barrett Browning kept with the Reverend William Merry from 1843 to 1844 on predestination and salvation by works, she identifies herself as a Congregationalist: "I am not a Baptist — but a Congregational Christian, — in the holding of my private opinions." In 1892, Ledbury, Herefordshire, held a design competition to build an Institute in honour of Barrett Browning. Brightwen Binyon beat 44 other designs. It was based on the timber-framed Market House, which was opposite the site, and was completed in 1896. However, Nikolaus Pevsner was not impressed by its style. It was used as a public library from 1938 to 2021, when new library facilities were provided for the town, and is now the headquarters of the Ledbury Poetry Festival. It has been Grade II-listed since 2007. How Do I Love Thee? How do I love thee? Let me count the ways. I love thee to the depth and breadth and height My soul can reach, when feeling out of sight For the ends of being and ideal grace. I love thee to the level of every day's Most quiet need, by sun and candle-light. I love thee freely, as men strive for right. I love thee purely, as they turn from praise. I love thee with the passion put to use In my old griefs, and with my childhood's faith. I love thee with a love I seemed to lose With my lost saints. I love thee with the breath, Smiles, tears, of all my life; and, if God choose, I shall but love thee better after death. Sonnet XLIII from Sonnets from the Portuguese, 1845 (published 1850) Barrett Browning was widely popular in the United Kingdom and the United States during her lifetime. Edgar Allan Poe was inspired by her poem Lady Geraldine's Courtship and specifically borrowed the poem's metre for his poem The Raven. Poe had reviewed Barrett Browning's work in the January 1845 issue of the Broadway Journal, writing that "her poetic inspiration is the highest – we can conceive of nothing more august. Her sense of Art is pure in itself." In return, she praised The Raven, and Poe dedicated his 1845 collection The Raven and Other Poems to her, referring to her as "the noblest of her sex". Barrett Browning's poetry greatly influenced Emily Dickinson, who admired her as a woman of achievement. 
Her popularity in the United States and Britain was advanced by her stands against social injustice, including slavery in the United States, injustice toward Italians from their foreign rulers, and child labour. Lilian Whiting published a biography of Barrett Browning (1899) which describes her as "the most philosophical poet" and depicts her life as "a Gospel of applied Christianity". To Whiting, the term "art for art's sake" did not apply to Barrett Browning's work, as each poem, distinctively purposeful, was borne of a more "honest vision". In this critical analysis, Whiting portrays Barrett Browning as a poet who uses knowledge of Classical literature with an "intuitive gift of spiritual divination". In Elizabeth Barrett Browning, Angela Leighton suggests that the portrayal of Barrett Browning as the "pious iconography of womanhood" has distracted us from her poetic achievements. Leighton cites the 1931 play by Rudolf Besier The Barretts of Wimpole Street as evidence that 20th-century literary criticism of Barrett Browning's work has suffered more as a result of her popularity than poetic ineptitude. The play was popularized by actress Katharine Cornell, for whom it became a signature role. It was an enormous success, both artistically and commercially, and was revived several times and adapted twice into movies. Sampson, however, considers the play to have been the most damaging cause of false myths about Elizabeth, and particularly the relationship with her, allegedly 'tyrannical', father. Throughout the 20th century, literary criticism of Barrett Browning's poetry remained sparse until her poems were discovered by the women's movement. She once described herself as being inclined to reject several women's rights principles, suggesting in letters to Mary Russell Mitford and her husband that she believed that there was an inferiority of intellect in women. In Aurora Leigh, however, she created a strong and independent woman who embraces both work and love. Leighton writes that because Elizabeth participates in the literary world, where voice and diction are dominated by perceived masculine superiority, she "is defined only in mysterious opposition to everything that distinguishes the male subject who writes..." A five-volume scholarly edition of her works was published in 2010, the first in over a century.
[ { "paragraph_id": 0, "text": "Elizabeth Barrett Browning (née Moulton-Barrett; 6 March 1806 – 29 June 1861) was an English poet of the Victorian era, popular in Britain and the United States during her lifetime and frequently anthologised after her death; her work received renewed attention following the feminist scholarship of the 1970s and 1980s, and greater recognition of women writers in English.", "title": "" }, { "paragraph_id": 1, "text": "Born in County Durham, the eldest of 12 children, Elizabeth Barrett wrote poetry from the age of eleven. Her mother's collection of her poems forms one of the largest extant collections of juvenilia by any English writer. At 15, she became ill, suffering intense head and spinal pain for the rest of her life. Later in life, she also developed lung problems, possibly tuberculosis. She took laudanum for the pain from an early age, which is likely to have contributed to her frail health.", "title": "" }, { "paragraph_id": 2, "text": "In the 1840s, Elizabeth was introduced to literary society through her distant cousin and patron John Kenyon. Her first adult collection of poems was published in 1838, and she wrote prolifically from 1841 to 1844, producing poetry, translation, and prose. She campaigned for the abolition of slavery, and her work helped influence reform in child labour legislation. Her prolific output made her a rival to Tennyson as a candidate for poet laureate on the death of Wordsworth.", "title": "" }, { "paragraph_id": 3, "text": "Elizabeth's volume Poems (1844) brought her great success, attracting the admiration of the writer Robert Browning. Their correspondence, courtship, and marriage were carried out in secret, for fear of her father's disapproval. Following the wedding, she was indeed disinherited by her father. In 1846, the couple moved to Italy, where she lived for the rest of her life. Elizabeth died in Florence in 1861. A collection of her later poems were published by her husband shortly after her death.", "title": "" }, { "paragraph_id": 4, "text": "They had a son, known as \"Pen\" (Robert Barrett, 1849–1912). Pen devoted himself to painting until his eyesight began to fail later in life. He also built a large collection of manuscripts and memorabilia of his parents, but because he died intestate, it was sold by public auction to various bidders and then scattered upon his death. The Armstrong Browning Library has recovered some of his collection, and it now houses the world's largest collection of Browning memorabilia.", "title": "" }, { "paragraph_id": 5, "text": "Elizabeth's work had a major influence on prominent writers of the day, including the American poets Edgar Allan Poe and Emily Dickinson. She is remembered for such poems as \"How Do I Love Thee?\" (Sonnet 43, 1845) and Aurora Leigh (1856).", "title": "" }, { "paragraph_id": 6, "text": "Some of Elizabeth Barrett's family had lived in Jamaica since 1655. Their wealth derived mainly from slave labour from their plantations in the Caribbean. Edward Barrett (1734–1798) was owner of 10,000 acres (40 km) in the estates of Cinnamon Hill, Cornwall, Cambridge, and Oxford in northern Jamaica. Elizabeth's maternal grandfather owned sugar plantations farmed by slaves they bought from Africa, mills, glassworks, and ships that traded between Jamaica and Newcastle in the United Kingdom.", "title": "Life and career" }, { "paragraph_id": 7, "text": "The family wished to hand down their name, stipulating that Barrett always should be held as a surname. 
In some cases, inheritance was given on condition that the name was used by the beneficiary; the English gentry and \"squirearchy\" had long encouraged this sort of name changing. Given this strong tradition, Elizabeth used \"Elizabeth Barrett Moulton Barrett\" on legal documents, and before she was married, she often signed herself \"Elizabeth Barrett Barrett\" or \"EBB\" (initials which she was able to keep after her wedding). Elizabeth's father chose to raise his family in England, and his business enterprises remained in Jamaica. Elizabeth's mother, Mary Graham Clarke, also owned plantations farmed by enslaved people in the British West Indies.", "title": "Life and career" }, { "paragraph_id": 8, "text": "Elizabeth Barrett Moulton-Barrett was born on (it is supposed) 6 March 1806 in Coxhoe Hall, between the villages of Coxhoe and Kelloe in County Durham, England. Her parents were Edward Barrett Moulton-Barrett and Mary Graham Clarke. However, it has been suggested that, when she was christened on 9 March, she was already three or four months old, and that this was concealed because her parents had married only on 14 May 1805. Although she had already been baptised by a family friend in that first week of her life, she was baptised again, more publicly, on 10 February 1808 at Kelloe parish church, at the same time as her younger brother, Edward (known as Bro). He had been born in June 1807, only 15 months after Elizabeth's stated date of birth. A private christening might seem unlikely for a family of standing, and while Bro's birth was celebrated with a holiday on the family's Caribbean plantations, Elizabeth's was not.", "title": "Life and career" }, { "paragraph_id": 9, "text": "Elizabeth was the eldest of 12 children (eight boys and four girls). Eleven lived to adulthood; one daughter died at the age of 3, when Elizabeth was 8. The children all had nicknames: Elizabeth was Ba. She rode her pony, went for family walks and picnics, socialised with other county families, and participated in home theatrical productions. Unlike her siblings, she immersed herself in books as often as she could get away from the social rituals of her family.", "title": "Life and career" }, { "paragraph_id": 10, "text": "In 1809, the family moved to Hope End, a 500-acre (200 ha) estate near the Malvern Hills in Ledbury, Herefordshire. Her father converted the Georgian house into stables and built a mansion of opulent Turkish design, which his wife described as something from the Arabian Nights' Entertainments.", "title": "Life and career" }, { "paragraph_id": 11, "text": "The interior's brass balustrades, mahogany doors inlaid with mother-of-pearl, and finely carved fireplaces were eventually complemented by lavish landscaping: ponds, grottos, kiosks, an ice house, a hothouse, and a subterranean passage from house to gardens. Her time at Hope End inspired her in later life to write Aurora Leigh (1856), her most ambitious work, which went through more than 20 editions by 1900, but none from 1905 to 1978.", "title": "Life and career" }, { "paragraph_id": 12, "text": "She was educated at home and tutored by Daniel McSwiney with her oldest brother. She began writing verses at the age of four. During the Hope End period, she was an intensely studious, precocious child. 
She claimed that she was reading novels at age 6, having been entranced by Pope's translations of Homer at age 8, studying Greek at age 10, and writing her own Homeric epic The Battle of Marathon: A Poem at age 11.", "title": "Life and career" }, { "paragraph_id": 13, "text": "In 1820, Mr Barrett privately published The Battle of Marathon, an epic-style poem, but all copies remained within the family. Her mother compiled the child's poetry into collections of \"Poems by Elizabeth B. Barrett\". Her father called her the \"Poet Laureate of Hope End\" and encouraged her work. The result is one of the larger collections of juvenilia of any English writer. Mary Russell Mitford described the young Elizabeth at this time as having \"a slight, delicate figure, with a shower of dark curls falling on each side of a most expressive face; large, tender eyes, richly fringed by dark eyelashes, and a smile like a sunbeam.\"", "title": "Life and career" }, { "paragraph_id": 14, "text": "At about this time, Elizabeth began to battle an illness, which the medical science of the time was unable to diagnose. All three sisters came down with the syndrome, but it persisted only in Elizabeth. She had intense head and spinal pain with loss of mobility. Various biographies link this to a riding accident at the time (she fell while trying to dismount a horse), but there is no evidence to support the link. Sent to recover at the Gloucester spa, she was treated – in the absence of symptoms supporting another diagnosis – for a spinal problem. This illness continued for the rest of her life, and it is believed to be unrelated to the lung disease which she developed in 1837.", "title": "Life and career" }, { "paragraph_id": 15, "text": "She began to take opiates for the pain, laudanum (an opium concoction) followed by morphine, then commonly prescribed. She became dependent on them for much of her adulthood; the use from an early age may well have contributed to her frail health. Biographers such as Alethea Hayter have suggested that this dependency may have contributed to the wild vividness of her imagination and the poetry that it produced.", "title": "Life and career" }, { "paragraph_id": 16, "text": "By 1821, she had read Mary Wollstonecraft's A Vindication of the Rights of Woman (1792), and she became a passionate supporter of Wollstonecraft's political ideas. The child's intellectual fascination with the classics and metaphysics was reflected in a religious intensity which she later described as \"not the deep persuasion of the mild Christian but the wild visions of an enthusiast.\" The Barretts attended services at the nearest Dissenting chapel, and Edward was active in Bible and missionary societies.", "title": "Life and career" }, { "paragraph_id": 17, "text": "Elizabeth's mother died in 1828, and she is buried at St Michael's Church, Ledbury, next to her daughter Mary. Sarah Graham-Clarke, Elizabeth's aunt, helped to care for the children, and she had clashes with Elizabeth's strong will. In 1831, Elizabeth's grandmother, Elizabeth Moulton, died. Following lawsuits and the abolition of slavery, Mr Barrett incurred great financial and investment losses that forced him to sell Hope End. Although the family was never poor, the place was seized and sold to satisfy creditors. 
Always secret in his financial dealings, he would not discuss his situation, and the family was haunted by the idea that they might have to move to Jamaica.", "title": "Life and career" }, { "paragraph_id": 18, "text": "From 1833 to 1835, she was living with her family at Belle Vue in Sidmouth. The site has now been renamed Cedar Shade and redeveloped. A blue plaque at the entrance to the site attests to its previous existence. In 1838, some years after the sale of Hope End, the family settled at 50 Wimpole Street, Marylebone, London.", "title": "Life and career" }, { "paragraph_id": 19, "text": "During 1837–1838, the poet was struck with illness again, with symptoms today suggesting tuberculous ulceration of the lungs. The same year, at her physician's insistence, she moved from London to Torquay on the Devonshire coast. Her former home now forms part of the Regina Hotel. Two tragedies then struck. In February 1840, her brother Samuel died of a fever in Jamaica, then her favourite brother Edward (Bro) was drowned in a sailing accident in Torquay in July. These events had a serious effect on her already fragile health. She felt guilty as her father had disapproved of Edward's trip to Torquay. She wrote to Mitford: \"That was a very near escape from madness, absolute hopeless madness\". The family returned to Wimpole Street in 1841.", "title": "Life and career" }, { "paragraph_id": 20, "text": "At Wimpole Street, Elizabeth spent most of her time in her upstairs room. Her health began to improve, but she saw few people other than her immediate family. One of those was John Kenyon, a wealthy friend and distant cousin of the family and patron of the arts. She received comfort from a spaniel named Flush, a gift from Mary Mitford. (Virginia Woolf later fictionalised the life of the dog, making him the protagonist of her 1933 novel Flush: A Biography).", "title": "Life and career" }, { "paragraph_id": 21, "text": "From 1841 to 1844, Elizabeth was prolific in poetry, translation, and prose. The poem The Cry of the Children, published in 1842 in Blackwood's, condemned child labour and helped bring about child-labour reforms by raising support for Lord Shaftesbury's Ten Hours Bill (1844). About the same time, she contributed critical prose pieces to Richard Henry Horne's A New Spirit of the Age, including a laudatory essay on Thomas Carlyle.", "title": "Life and career" }, { "paragraph_id": 22, "text": "In 1844, she published the two-volume Poems, which included \"A Drama of Exile\", \"A Vision of Poets\", and \"Lady Geraldine's Courtship\", and two substantial critical essays for 1842 issues of The Athenaeum. A self-proclaimed \"adorer of Carlyle\", she sent a copy to him as \"a tribute of admiration & respect\", which began a correspondence between them. \"Since she was not burdened with any domestic duties expected of her sisters, Barrett Browning could now devote herself entirely to the life of the mind, cultivating an enormous correspondence, reading widely\". Her prolific output made her a rival to Tennyson as a candidate for poet laureate in 1850 on the death of Wordsworth.", "title": "Life and career" }, { "paragraph_id": 23, "text": "A Royal Society of Arts blue plaque now commemorates Elizabeth at 50 Wimpole Street.", "title": "Life and career" }, { "paragraph_id": 24, "text": "Her 1844 volume Poems made her one of the more popular writers in the country and inspired Robert Browning to write to her. 
He wrote \"I love your verses with all my heart, dear Miss Barrett,\" praising their \"fresh strange music, the affluent language, the exquisite pathos and true new brave thought.\"", "title": "Life and career" }, { "paragraph_id": 25, "text": "Kenyon arranged for Browning to meet Elizabeth on 20 May 1845, in her rooms, and so began one of the most famous courtships in literature. Elizabeth had produced a large amount of work, but Browning had a great influence on her subsequent writing as did she on his: Two of Barrett's most famous pieces were written after she met Browning, Sonnets from the Portuguese and Aurora Leigh. Robert's Men and Women is also a product of that time.", "title": "Life and career" }, { "paragraph_id": 26, "text": "Some critics state that her activity was, in some ways, in decay before she met Browning: \"Until her relationship with Robert Browning began in 1845, Barrett's willingness to engage in public discourse about social issues and about aesthetic issues in poetry, which had been so strong in her youth, gradually diminished, as did her physical health. As an intellectual presence and a physical being, she was becoming a shadow of herself.\"", "title": "Life and career" }, { "paragraph_id": 27, "text": "The courtship and marriage between Robert Browning and Elizabeth were made secretly as she knew her father would disapprove. After a private marriage at St Marylebone Parish Church, they honeymooned in Paris and then moved to Italy in September 1846, which became their home almost continuously until her death. Elizabeth's loyal lady's maid Elizabeth Wilson witnessed the marriage and accompanied the couple to Italy.", "title": "Life and career" }, { "paragraph_id": 28, "text": "Mr Barrett disinherited Elizabeth as he did each of his children who married. Elizabeth had foreseen her father's anger but had not anticipated her brothers' rejection. As Elizabeth had some money of her own, the couple were reasonably comfortable in Italy. The Brownings were well respected and even famous. Elizabeth grew stronger, and in 1849, at the age of 43, between four miscarriages, she gave birth to a son, Robert Wiedeman Barrett Browning, whom they called Pen. Their son later married, but had no legitimate children.", "title": "Life and career" }, { "paragraph_id": 29, "text": "At her husband's insistence, Elizabeth's second edition of Poems included her love sonnets; as a result, her popularity increased (as did critical regard), and her artistic position was confirmed. During the years of her marriage, her literary reputation far surpassed that of her poet-husband; when visitors came to their home in Florence, she was invariably the greater attraction.", "title": "Life and career" }, { "paragraph_id": 30, "text": "The couple came to know a wide circle of artists and writers, including William Makepeace Thackeray, sculptor Harriet Hosmer (who, she wrote, seemed to be the \"perfectly emancipated female\") and Harriet Beecher Stowe. In 1849, she met Margaret Fuller; Carlyle in 1851; French novelist George Sand in 1852, whom she had long admired. Among her intimate friends in Florence was the writer Isa Blagden, whom she encouraged to write novels. They met Alfred Tennyson in Paris, and John Forster, Samuel Rogers and the Carlyles in London, later befriending Charles Kingsley and John Ruskin.", "title": "Life and career" }, { "paragraph_id": 31, "text": "After the death of an old friend, G. B. Hunter, and then of her father, Barrett Browning's health started to deteriorate. 
The Brownings moved from Florence to Siena, residing at the Villa Alberti. Engrossed in Italian politics, she issued a small volume of political poems titled Poems before Congress (1860) \"most of which were written to express her sympathy with the Italian cause after the outbreak of fighting in 1859\". They caused a furore in Britain, and the conservative magazines Blackwood's and the Saturday Review labelled her a fanatic. She dedicated this book to her husband. Her last work was A Musical Instrument, published posthumously.", "title": "Life and career" }, { "paragraph_id": 32, "text": "Barrett Browning's sister Henrietta died in November 1860. The couple spent the winter of 1860–1861 in Rome where Barrett Browning's health deteriorated, and they returned to Florence in early June 1861. She became gradually weaker, using morphine to ease her pain. She died on 29 June 1861 in her husband's arms. Browning said that she died \"smilingly, happily, and with a face like a girl's...Her last word was...'Beautiful' \". She was buried in the Protestant English Cemetery of Florence. \"On Monday July 1 the shops in the area around Casa Guidi were closed, while Elizabeth was mourned with unusual demonstrations.\" The nature of her illness is still unclear. Some modern scientists speculate her illness may have been hypokalemic periodic paralysis, a genetic disorder that causes weakness and many of the other symptoms she described.", "title": "Life and career" }, { "paragraph_id": 33, "text": "Barrett Browning's first known poem \"On the Cruelty of Forcement to Man\" was written at the age of 6 or 8. The manuscript, which protests against impressment, is currently in the Berg Collection of the New York Public Library; the exact date is controversial because the \"2\" in the date 1812 is written over something else that is scratched out.", "title": "Life and career" }, { "paragraph_id": 34, "text": "Her first independent publication was \"Stanzas Excited by Reflections on the Present State of Greece\" in The New Monthly Magazine of May 1821; followed two months later by \"Thoughts Awakened by Contemplating a Piece of the Palm which Grows on the Summit of the Acropolis at Athens\".", "title": "Life and career" }, { "paragraph_id": 35, "text": "Her first collection of poems, An Essay on Mind, with Other Poems, was published in 1826 and reflected her passion for Byron and Greek politics. Its publication drew the attention of Hugh Stuart Boyd, a blind scholar of the Greek language, and of Uvedale Price, another Greek scholar, with whom she maintained sustained correspondence. Among other neighbours was Mrs James Martin from Colwall, with whom she corresponded throughout her life. Later, at Boyd's suggestion, she translated Aeschylus' Prometheus Bound (published in 1833; retranslated in 1850). During their friendship, Barrett studied Greek literature, including Homer, Pindar and Aristophanes.", "title": "Life and career" }, { "paragraph_id": 36, "text": "Elizabeth opposed slavery and published two poems highlighting the barbarity of the institution and her support for the abolitionist cause: \"The Runaway Slave at Pilgrim's Point\" and \"A Curse for a Nation\". The first depicts an enslaved woman whipped, raped, and made pregnant cursing her enslavers. 
Elizabeth declared herself glad that the slaves were \"virtually free\" when the Slavery Abolition Act passed in the British Parliament despite the fact that her father believed that abolition would ruin his business.", "title": "Life and career" }, { "paragraph_id": 37, "text": "The date of publication of these poems is in dispute, but her position on slavery in the poems is clear and may have led to a rift between Elizabeth and her father. She wrote to John Ruskin in 1855 \"I belong to a family of West Indian slaveholders, and if I believed in curses, I should be afraid\". Her father and uncle were unaffected by the Baptist War (1831–1832) and continued to own slaves until passage of the Slavery Abolition Act.", "title": "Life and career" }, { "paragraph_id": 38, "text": "In London, John Kenyon introduced Elizabeth to literary figures including William Wordsworth, Mary Russell Mitford, Samuel Taylor Coleridge, Alfred Tennyson and Thomas Carlyle. Elizabeth continued to write, contributing \"The Romaunt of Margaret\", \"The Romaunt of the Page\", \"The Poet's Vow\" and other pieces to various periodicals. She corresponded with other writers, including Mary Russell Mitford, who became a close friend and who supported Elizabeth's literary ambitions.", "title": "Life and career" }, { "paragraph_id": 39, "text": "In 1838 The Seraphim and Other Poems appeared, the first volume of Elizabeth's mature poetry to appear under her own name.", "title": "Life and career" }, { "paragraph_id": 40, "text": "Sonnets from the Portuguese was published in 1850. There is debate about the origin of the title. Some say it refers to the series of sonnets of the 16th-century Portuguese poet Luís de Camões. However, \"my little Portuguese\" was a pet name that Browning had adopted for Elizabeth and this may have some connection.", "title": "Life and career" }, { "paragraph_id": 41, "text": "The verse-novel Aurora Leigh, her most ambitious and perhaps the most popular of her longer poems, appeared in 1856. It is the story of a female writer making her way in life, balancing work and love, and based on Elizabeth's own experiences. Aurora Leigh was an important influence on Susan B. Anthony's thinking about the traditional roles of women, with regard to marriage versus independent individuality. The North American Review praised Elizabeth's poem: \"Mrs. Browning's poems are, in all respects, the utterance of a woman — of a woman of great learning, rich experience, and powerful genius, uniting to her woman's nature the strength which is sometimes thought peculiar to a man.\"", "title": "Life and career" }, { "paragraph_id": 42, "text": "Much of Barrett Browning's work carries a religious theme. She had read and studied such works as Milton's Paradise Lost and Dante's Inferno. She says in her writing, \"We want the sense of the saturation of Christ's blood upon the souls of our poets, that it may cry through them in answer to the ceaseless wail of the Sphinx of our humanity, expounding agony into renovation. Something of this has been perceived in art when its glory was at the fullest. Something of a yearning after this may be seen among the Greek Christian poets, something which would have been much with a stronger faculty\". She believed that \"Christ's religion is essentially poetry – poetry glorified\". 
She explored the religious aspect in many of her poems, especially in her early work, such as the sonnets.", "title": "Spiritual influence" }, { "paragraph_id": 43, "text": "She was interested in theological debate, had learned Hebrew and read the Hebrew Bible. Her seminal Aurora Leigh, for example, features religious imagery and allusion to the apocalypse. The critic Cynthia Scheinberg notes that female characters in Aurora Leigh and her earlier work \"The Virgin Mary to the Child Jesus\" allude to Miriam, sister and caregiver to Moses. These allusions to Miriam in both poems mirror the way in which Barrett Browning herself drew from Jewish history, while distancing herself from it, in order to maintain the cultural norms of a Christian woman poet of the Victorian Age.", "title": "Spiritual influence" }, { "paragraph_id": 44, "text": "In the correspondence Barrett Browning kept with the Reverend William Merry from 1843 to 1844 on predestination and salvation by works, she identifies herself as a Congregationalist: \"I am not a Baptist — but a Congregational Christian, — in the holding of my private opinions.\"", "title": "Spiritual influence" }, { "paragraph_id": 45, "text": "In 1892, Ledbury, Herefordshire, held a design competition to build an Institute in honour of Barrett Browning. Brightwen Binyon beat 44 other designs. It was based on the timber-framed Market House, which was opposite the site, and was completed in 1896. However, Nikolaus Pevsner was not impressed by its style. It was used as a public library from 1938 to 2021, when new library facilities were provided for the town, and is now the headquarters of the Ledbury Poetry Festival. It has been Grade II-listed since 2007.", "title": "Barrett Browning Institute" }, { "paragraph_id": 46, "text": "How Do I Love Thee? How do I love thee? Let me count the ways. I love thee to the depth and breadth and height My soul can reach, when feeling out of sight For the ends of being and ideal grace. I love thee to the level of every day's Most quiet need, by sun and candle-light. I love thee freely, as men strive for right. I love thee purely, as they turn from praise. I love thee with the passion put to use In my old griefs, and with my childhood's faith. I love thee with a love I seemed to lose With my lost saints. I love thee with the breath, Smiles, tears, of all my life; and, if God choose, I shall but love thee better after death.", "title": "Critical reception" }, { "paragraph_id": 47, "text": "Sonnet XLIII from Sonnets from the Portuguese, 1845 (published 1850)", "title": "Critical reception" }, { "paragraph_id": 48, "text": "Barrett Browning was widely popular in the United Kingdom and the United States during her lifetime. Edgar Allan Poe was inspired by her poem Lady Geraldine's Courtship and specifically borrowed the poem's metre for his poem The Raven. Poe had reviewed Barrett Browning's work in the January 1845 issue of the Broadway Journal, writing that \"her poetic inspiration is the highest – we can conceive of nothing more august. Her sense of Art is pure in itself.\" In return, she praised The Raven, and Poe dedicated his 1845 collection The Raven and Other Poems to her, referring to her as \"the noblest of her sex\".", "title": "Critical reception" }, { "paragraph_id": 49, "text": "Barrett Browning's poetry greatly influenced Emily Dickinson, who admired her as a woman of achievement. 
Her popularity in the United States and Britain was advanced by her stands against social injustice, including slavery in the United States, injustice toward Italians from their foreign rulers, and child labour.", "title": "Critical reception" }, { "paragraph_id": 50, "text": "Lilian Whiting published a biography of Barrett Browning (1899) which describes her as \"the most philosophical poet\" and depicts her life as \"a Gospel of applied Christianity\". To Whiting, the term \"art for art's sake\" did not apply to Barrett Browning's work, as each poem, distinctively purposeful, was borne of a more \"honest vision\". In this critical analysis, Whiting portrays Barrett Browning as a poet who uses knowledge of Classical literature with an \"intuitive gift of spiritual divination\". In Elizabeth Barrett Browning, Angela Leighton suggests that the portrayal of Barrett Browning as the \"pious iconography of womanhood\" has distracted us from her poetic achievements. Leighton cites the 1931 play by Rudolf Besier The Barretts of Wimpole Street as evidence that 20th-century literary criticism of Barrett Browning's work has suffered more as a result of her popularity than poetic ineptitude. The play was popularized by actress Katharine Cornell, for whom it became a signature role. It was an enormous success, both artistically and commercially, and was revived several times and adapted twice into movies. Sampson, however, considers the play to have been the most damaging cause of false myths about Elizabeth, and particularly the relationship with her, allegedly 'tyrannical', father.", "title": "Critical reception" }, { "paragraph_id": 51, "text": "Throughout the 20th century, literary criticism of Barrett Browning's poetry remained sparse until her poems were discovered by the women's movement. She once described herself as being inclined to reject several women's rights principles, suggesting in letters to Mary Russell Mitford and her husband that she believed that there was an inferiority of intellect in women. In Aurora Leigh, however, she created a strong and independent woman who embraces both work and love. Leighton writes that because Elizabeth participates in the literary world, where voice and diction are dominated by perceived masculine superiority, she \"is defined only in mysterious opposition to everything that distinguishes the male subject who writes...\" A five-volume scholarly edition of her works was published in 2010, the first in over a century.", "title": "Critical reception" } ]
Elizabeth Barrett Browning was an English poet of the Victorian era, popular in Britain and the United States during her lifetime and frequently anthologised after her death; her work received renewed attention following the feminist scholarship of the 1970s and 1980s, and greater recognition of women writers in English. Born in County Durham, the eldest of 12 children, Elizabeth Barrett wrote poetry from the age of eleven. Her mother's collection of her poems forms one of the largest extant collections of juvenilia by any English writer. At 15, she became ill, suffering intense head and spinal pain for the rest of her life. Later in life, she also developed lung problems, possibly tuberculosis. She took laudanum for the pain from an early age, which is likely to have contributed to her frail health. In the 1840s, Elizabeth was introduced to literary society through her distant cousin and patron John Kenyon. Her first adult collection of poems was published in 1838, and she wrote prolifically from 1841 to 1844, producing poetry, translation, and prose. She campaigned for the abolition of slavery, and her work helped influence reform in child labour legislation. Her prolific output made her a rival to Tennyson as a candidate for poet laureate on the death of Wordsworth. Elizabeth's volume Poems (1844) brought her great success, attracting the admiration of the writer Robert Browning. Their correspondence, courtship, and marriage were carried out in secret, for fear of her father's disapproval. Following the wedding, she was indeed disinherited by her father. In 1846, the couple moved to Italy, where she lived for the rest of her life. Elizabeth died in Florence in 1861. A collection of her later poems was published by her husband shortly after her death. They had a son, known as "Pen". Pen devoted himself to painting until his eyesight began to fail later in life. He also built a large collection of manuscripts and memorabilia of his parents, but because he died intestate, it was sold by public auction upon his death and scattered among various bidders. The Armstrong Browning Library has recovered some of his collection, and it now houses the world's largest collection of Browning memorabilia. Elizabeth's work had a major influence on prominent writers of the day, including the American poets Edgar Allan Poe and Emily Dickinson. She is remembered for such poems as "How Do I Love Thee?" and Aurora Leigh (1856).
2001-07-30T18:59:40Z
2023-11-19T15:14:14Z
[ "Template:Short description", "Template:Use dmy dates", "Template:Verification needed", "Template:Cite book", "Template:UK National Archives ID", "Template:Curlie", "Template:Quote box", "Template:Elizabeth Barrett Browning", "Template:Reflist", "Template:Gutenberg author", "Template:Librivox author", "Template:Internet Archive author", "Template:Webarchive", "Template:StandardEbooks", "Template:New Woman (late 19th century)", "Template:Robert Browning", "Template:Portalbar", "Template:Infobox writer", "Template:Convert", "Template:Cite news", "Template:Refend", "Template:Sisterlinks", "Template:Authority control", "Template:Use British English", "Template:Distinguish", "Template:Circa", "Template:Noteslist", "Template:Cite web", "Template:Refbegin", "Template:Library resources box", "Template:ISBN", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Elizabeth_Barrett_Browning
9,628
Enlil
Enlil, later known as Elil, is an ancient Mesopotamian god associated with wind, air, earth, and storms. He is first attested as the chief deity of the Sumerian pantheon, but he was later worshipped by the Akkadians, Babylonians, Assyrians, and Hurrians. Enlil's primary center of worship was the Ekur temple in the city of Nippur, which was believed to have been built by Enlil himself and was regarded as the "mooring-rope" of heaven and earth. He is also sometimes referred to in Sumerian texts as Nunamnir. According to one Sumerian hymn, Enlil himself was so holy that not even the other gods could look upon him. Enlil rose to prominence during the twenty-fourth century BC with the rise of Nippur. His cult fell into decline after Nippur was sacked by the Elamites in 1230 BC and he was eventually supplanted as the chief god of the Mesopotamian pantheon by the Babylonian national god Marduk. Enlil plays a vital role in the Sumerian creation myth; he separates An (heaven) from Ki (earth), thus making the world habitable for humans. In the Sumerian flood myth, Enlil rewards Ziusudra with immortality for having survived the flood and, in the Babylonian flood myth, Enlil is the cause of the flood himself, having sent the flood to exterminate the human race, who made too much noise and prevented him from sleeping. The myth of Enlil and Ninlil is about Enlil's serial seduction of the goddess Ninlil in various guises, resulting in the conception of the moon-god Nanna and the Underworld deities Nergal, Ninazu, and Enbilulu. Enlil was regarded as the inventor of the mattock and the patron of agriculture. Enlil also features prominently in several myths involving his son Ninurta, including Anzû and the Tablet of Destinies and Lugale. Enlil's name comes from ancient Sumerian EN (𒂗), meaning "lord" and LÍL (𒆤), the meaning of which is contentious, and which has sometimes been interpreted as meaning winds as a weather phenomenon (making Enlil a weather and sky god, "Lord Wind" or "Lord Storm"), or alternatively as signifying a spirit or phantom whose presence may be felt as stirring of the air, or possibly as representing a partial Semitic loanword rather than a Sumerian word at all. Enlil's name is not a genitive construction, suggesting that Enlil was seen as the personification of LÍL rather than merely the cause of LÍL. Piotr Steinkeller has written that the meaning of LÍL may not actually be a clue to a specific divine domain of Enlil's, whether storms, spirits, or otherwise, since Enlil may have been "a typical universal god [...] without any specific domain." Enlil who sits broadly on the white dais, on the lofty dais, who perfects the decrees of power, lordship, and princeship, the earth-gods bow down in fear before him, the heaven-gods humble themselves before him... Enlil was the patron god of the Sumerian city-state of Nippur and his main center of worship was the Ekur temple located there. The name of the temple literally means "Mountain House" in ancient Sumerian. The Ekur was believed to have been built and established by Enlil himself. It was believed to be the "mooring-rope" of heaven and earth, meaning that it was seen as "a channel of communication between earth and heaven". A hymn written during the reign of Ur-Nammu, the founder of the Third Dynasty of Ur, describes the E-kur in great detail, stating that its gates were carved with scenes of Imdugud, a lesser deity sometimes shown as a giant bird, slaying a lion and an eagle snatching up a sinner. 
The Sumerians believed that the sole purpose of humanity's existence was to serve the gods. They thought that a god's statue was a physical embodiment of the god himself. As such, cult statues were given constant care and attention and a set of priests were assigned to tend to them. People worshipped Enlil by offering food and other human necessities to him. The food, which was ritually laid out before the god's cult statue in the form of a feast, was believed to be Enlil's daily meal, but, after the ritual, it would be distributed among his priests. These priests were also responsible for changing the cult statue's clothing. The Sumerians envisioned Enlil as a benevolent, fatherly deity, who watches over humanity and cares for their well-being. One Sumerian hymn describes Enlil as so glorious that even the other gods could not look upon him. The same hymn also states that, without Enlil, civilization could not exist. Enlil's epithets include titles such as "the Great Mountain" and "King of the Foreign Lands". Enlil is also sometimes described as a "raging storm", a "wild bull", and a "merchant". The Mesopotamians envisioned him as a creator, a father, a king, and the supreme lord of the universe. He was also known as "Nunamnir" and is referred to in at least one text as the "East Wind and North Wind". Kings regarded Enlil as a model ruler and sought to emulate his example. Enlil was said to be supremely just and intolerant towards evil. Rulers from all over Sumer would travel to Enlil's temple in Nippur to be legitimized. They would return Enlil's favor by devoting lands and precious objects to his temple as offerings. Nippur was the only Sumerian city-state that never built a palace; this was intended to symbolize the city's importance as the center of the cult of Enlil by showing that Enlil himself was the city's king. Even during the Babylonian Period, when Marduk had superseded Enlil as the supreme god, Babylonian kings still traveled to the holy city of Nippur to seek recognition of their right to rule. Enlil first rose to prominence during the twenty-fourth century BC, when the importance of the god An began to wane. During this time period, Enlil and An are frequently invoked together in inscriptions. Enlil remained the supreme god in Mesopotamia throughout the Amorite Period, with Amorite monarchs proclaiming Enlil as the source of their legitimacy. Enlil's importance began to wane after the Babylonian king Hammurabi conquered Sumer. The Babylonians worshipped Enlil under the name "Elil" and the Hurrians syncretized him with their own god Kumarbi. In one Hurrian ritual, Enlil and Apantu are invoked as "the father and mother of Išḫara". Enlil is also invoked alongside Ninlil as a member of "the mighty and firmly established gods". During the Kassite Period (c. 1592 BC – 1155 BC), Nippur briefly managed to regain influence in the region and Enlil rose to prominence once again. From around 1300 BC onwards, Enlil was syncretized with the Assyrian national god Aššur, who was the most important deity in the Assyrian pantheon. Then, in 1230 BC, the Elamites attacked Nippur and the city fell into decline, taking the cult of Enlil along with it. Approximately one hundred years later, Enlil's role as the head of the pantheon was given to Marduk, the national god of the Babylonians. Enlil was represented by the symbol of a horned cap, which consisted of up to seven superimposed pairs of ox-horns. 
Such crowns were an important symbol of divinity; gods had been shown wearing them ever since the third millennium BC. The horned cap remained consistent in form and meaning from the earliest days of Sumerian prehistory up until the time of the Persian conquest and beyond. The Sumerians had a complex numerological system, in which certain numbers were believed to hold special ritual significance. Within this system, Enlil was associated with the number fifty, which was considered sacred to him. Enlil was part of a triad of deities, which also included An and Enki. These three deities together were the embodiment of all the fixed stars in the night sky. An was identified with all the stars of the equatorial sky, Enlil with those of the northern sky, and Enki with those of the southern sky. The path of Enlil's celestial orbit was a continuous, symmetrical circle around the north celestial pole, but those of An and Enki were believed to intersect at various points. Enlil was associated with the constellation Boötes. The main source of information about the Sumerian creation myth is the prologue to the epic poem Gilgamesh, Enkidu, and the Netherworld (ETCSL 1.8.1.4), which briefly describes the process of creation: originally, there was only Nammu, the primeval sea. Then, Nammu gave birth to An, the sky, and Ki, the earth. An and Ki mated with each other, causing Ki to give birth to Enlil. Enlil separated An from Ki and carried off the earth as his domain, while An carried off the sky. Enlil marries his mother, Ki, and from this union all the plant and animal life on earth is produced. Enlil and Ninlil (ETCSL 1.2.1) is a nearly complete 152-line Sumerian poem describing the affair between Enlil and the goddess Ninlil. First, Ninlil's mother Nunbarshegunu instructs Ninlil to go bathe in the river. Ninlil goes to the river, where Enlil seduces her and impregnates her with their son, the moon-god Nanna. Because of this, Enlil is banished to Kur, the Sumerian underworld. Ninlil follows Enlil to the underworld, where he impersonates the "man of the gate". Ninlil demands to know where Enlil has gone, but Enlil, still impersonating the gatekeeper, refuses to answer. He then seduces Ninlil and impregnates her with Nergal, the god of death. The same scenario repeats, only this time Enlil instead impersonates the "man of the river of the nether world, the man-devouring river"; once again, he seduces Ninlil and impregnates her with the god Ninazu. Finally, Enlil impersonates the "man of the boat"; once again, he seduces Ninlil and impregnates her with Enbilulu, the "inspector of the canals". The story of Enlil's courtship with Ninlil is primarily a genealogical myth invented to explain the origins of the moon-god Nanna, as well as the various gods of the Underworld, but it is also, to some extent, a coming-of-age story describing Enlil and Ninlil's emergence from adolescence into adulthood. The story also explains Ninlil's role as Enlil's consort; in the poem, Ninlil declares, "As Enlil is your master, so am I also your mistress!" The story is also historically significant because, if the current interpretation of it is correct, it is the oldest known myth in which a god changes shape. In the Sumerian version of the flood story (ETCSL 1.7.4), the causes of the flood are unclear because the portion of the tablet recording the beginning of the story has been destroyed. Somehow, a mortal known as Ziusudra manages to survive the flood, likely through the help of the god Enki. 
The tablet begins in the middle of the description of the flood. The flood lasts for seven days and seven nights before it subsides. Then, Utu, the god of the Sun, emerges. Ziusudra opens a window in the side of the boat and falls down prostrate before the god. Next, he sacrifices an ox and a sheep in honor of Utu. At this point, the text breaks off again. When it picks back up, Enlil and An are in the midst of declaring Ziusudra immortal as an honor for having managed to survive the flood. The remaining portion of the tablet after this point is destroyed. In the later Akkadian version of the flood story, recorded in the Epic of Gilgamesh, Enlil actually causes the flood, seeking to annihilate every living thing on earth because the humans, who are vastly overpopulated, make too much noise and prevent him from sleeping. In this version of the story, the hero is Utnapishtim, who is warned ahead of time by Ea, the Babylonian equivalent of Enki, that the flood is coming. The flood lasts for seven days; when it ends, Ishtar, who had mourned the destruction of humanity, promises Utnapishtim that Enlil will never cause a flood again. When Enlil sees that Utnapishtim and his family have survived, he is outraged, but his son Ninurta speaks up in favor of humanity, arguing that, instead of causing floods, Enlil should simply ensure that humans never become overpopulated by reducing their numbers using wild animals and famines. Enlil goes into the boat; Utnapishtim and his wife bow before him. Enlil, now appeased, grants Utnapishtim immortality as a reward for his loyalty to the gods. Plucks at the roots, tears at the crown, the pickax spares the... plants; the pickax, its fate is decreed by father Enlil, the pickax is exalted. A nearly complete 108-line poem from the Early Dynastic Period (c. 2900 – 2350 BC) describes Enlil's invention of the mattock, a key agricultural pick, hoe, ax, or digging tool of the Sumerians. In the poem, Enlil conjures the mattock into existence and decrees its fate. The mattock is described as gloriously beautiful; it is made of pure gold and its head is carved from lapis lazuli. Enlil gives the tool over to the humans, who use it to build cities, subjugate their people, and pull up weeds. Enlil was believed to aid in the growth of plants. The Sumerian poem Enlil Chooses the Farmer-God (ETCSL 5.3.3) describes how Enlil, hoping "to establish abundance and prosperity", creates two gods Emesh and Enten, a shepherd and a farmer, respectively. The two gods argue and Emesh lays claim to Enten's position. They take the dispute before Enlil, who rules in favor of Enten; the two gods rejoice and reconcile. In the Sumerian poem Lugale (ETCSL 1.6.2), Enlil gives advice to his son, the god Ninurta, advising him on a strategy to slay the demon Asag. This advice is relayed to Ninurta by way of Sharur, his enchanted talking mace, which had been sent by Ninurta to the realm of the gods to seek counsel from Enlil directly. In the Old, Middle, and Late Babylonian myth of Anzû and the Tablet of Destinies, the Anzû, a giant, monstrous bird, betrays Enlil and steals the Tablet of Destinies, a sacred clay tablet belonging to Enlil that grants him his authority, while Enlil is preparing for a bath. The rivers dry up and the gods are stripped of their powers. The gods send Adad, Girra, and Shara to defeat the Anzû, but all of them fail. Finally, Ea proposes that the gods should send Ninurta, Enlil's son. Ninurta successfully defeats the Anzû and returns the Tablet of Destinies to his father. 
As a reward, Ninurta is granted a prominent seat on the council of the gods. A badly damaged text from the Neo-Assyrian Period (911 – 612 BC) describes Marduk leading his army of Anunnaki into the sacred city of Nippur and causing a disturbance. The disturbance causes a flood, which forces the resident gods of Nippur under the leadership of Enlil to take shelter in the Eshumesha temple to Ninurta. Enlil is enraged at Marduk's transgression and orders the gods of Eshumesha to take Marduk and the other Anunnaki as prisoners. The Anunnaki are captured, but Marduk appoints his front-runner Mushteshirhablim to lead a revolt against the gods of Eshumesha and sends his messenger Neretagmil to alert Nabu, the god of literacy. When the Eshumesha gods hear Nabu speak, they come out of their temple to search for him. Marduk defeats the Eshumesha gods and takes 360 of them as prisoners of war, including Enlil himself. Enlil protests that the Eshumesha gods are innocent, so Marduk puts them on trial before the Anunnaki. The text ends with a warning from Damkianna (another name for Ninhursag) to the gods and to humanity, pleading with them not to repeat the war between the Anunnaki and the gods of Eshumesha.
[ { "paragraph_id": 0, "text": "Enlil, later known as Elil, is an ancient Mesopotamian god associated with wind, air, earth, and storms. He is first attested as the chief deity of the Sumerian pantheon, but he was later worshipped by the Akkadians, Babylonians, Assyrians, and Hurrians. Enlil's primary center of worship was the Ekur temple in the city of Nippur, which was believed to have been built by Enlil himself and was regarded as the \"mooring-rope\" of heaven and earth. He is also sometimes referred to in Sumerian texts as Nunamnir. According to one Sumerian hymn, Enlil himself was so holy that not even the other gods could look upon him. Enlil rose to prominence during the twenty-fourth century BC with the rise of Nippur. His cult fell into decline after Nippur was sacked by the Elamites in 1230 BC and he was eventually supplanted as the chief god of the Mesopotamian pantheon by the Babylonian national god Marduk.", "title": "" }, { "paragraph_id": 1, "text": "Enlil plays a vital role in the Sumerian creation myth; he separates An (heaven) from Ki (earth), thus making the world habitable for humans. In the Sumerian flood myth, Enlil rewards Ziusudra with immortality for having survived the flood and, in the Babylonian flood myth, Enlil is the cause of the flood himself, having sent the flood to exterminate the human race, who made too much noise and prevented him from sleeping. The myth of Enlil and Ninlil is about Enlil's serial seduction of the goddess Ninlil in various guises, resulting in the conception of the moon-god Nanna and the Underworld deities Nergal, Ninazu, and Enbilulu. Enlil was regarded as the inventor of the mattock and the patron of agriculture. Enlil also features prominently in several myths involving his son Ninurta, including Anzû and the Tablet of Destinies and Lugale.", "title": "" }, { "paragraph_id": 2, "text": "Enlil's name comes from ancient Sumerian EN (𒂗), meaning \"lord\" and LÍL (𒆤), the meaning of which is contentious, and which has sometimes been interpreted as meaning winds as a weather phenomenon (making Enlil a weather and sky god, \"Lord Wind\" or \"Lord Storm\"), or alternatively as signifying a spirit or phantom whose presence may be felt as stirring of the air, or possibly as representing a partial Semitic loanword rather than a Sumerian word at all. Enlil's name is not a genitive construction, suggesting that Enlil was seen as the personification of LÍL rather than merely the cause of LÍL.", "title": "Etymology" }, { "paragraph_id": 3, "text": "Piotr Steinkeller has written that the meaning of LÍL may not actually be a clue to a specific divine domain of Enlil's, whether storms, spirits, or otherwise, since Enlil may have been \"a typical universal god [...] without any specific domain.\"", "title": "Etymology" }, { "paragraph_id": 4, "text": "Enlil who sits broadly on the white dais, on the lofty dais, who perfects the decrees of power, lordship, and princeship, the earth-gods bow down in fear before him, the heaven-gods humble themselves before him...", "title": "Worship" }, { "paragraph_id": 5, "text": "Enlil was the patron god of the Sumerian city-state of Nippur and his main center of worship was the Ekur temple located there. The name of the temple literally means \"Mountain House\" in ancient Sumerian. The Ekur was believed to have been built and established by Enlil himself. It was believed to be the \"mooring-rope\" of heaven and earth, meaning that it was seen as \"a channel of communication between earth and heaven\". 
A hymn written during the reign of Ur-Nammu, the founder of the Third Dynasty of Ur, describes the E-kur in great detail, stating that its gates were carved with scenes of Imdugud, a lesser deity sometimes shown as a giant bird, slaying a lion and an eagle snatching up a sinner.", "title": "Worship" }, { "paragraph_id": 6, "text": "The Sumerians believed that the sole purpose of humanity's existence was to serve the gods. They thought that a god's statue was a physical embodiment of the god himself. As such, cult statues were given constant care and attention and a set of priests were assigned to tend to them. People worshipped Enlil by offering food and other human necessities to him. The food, which was ritually laid out before the god's cult statue in the form of a feast, was believed to be Enlil's daily meal, but, after the ritual, it would be distributed among his priests. These priests were also responsible for changing the cult statue's clothing.", "title": "Worship" }, { "paragraph_id": 7, "text": "The Sumerians envisioned Enlil as a benevolent, fatherly deity, who watches over humanity and cares for their well-being. One Sumerian hymn describes Enlil as so glorious that even the other gods could not look upon him. The same hymn also states that, without Enlil, civilization could not exist. Enlil's epithets include titles such as \"the Great Mountain\" and \"King of the Foreign Lands\". Enlil is also sometimes described as a \"raging storm\", a \"wild bull\", and a \"merchant\". The Mesopotamians envisioned him as a creator, a father, a king, and the supreme lord of the universe. He was also known as \"Nunamnir\" and is referred to in at least one text as the \"East Wind and North Wind\".", "title": "Worship" }, { "paragraph_id": 8, "text": "Kings regarded Enlil as a model ruler and sought to emulate his example. Enlil was said to be supremely just and intolerant towards evil. Rulers from all over Sumer would travel to Enlil's temple in Nippur to be legitimized. They would return Enlil's favor by devoting lands and precious objects to his temple as offerings. Nippur was the only Sumerian city-state that never built a palace; this was intended to symbolize the city's importance as the center of the cult of Enlil by showing that Enlil himself was the city's king. Even during the Babylonian Period, when Marduk had superseded Enlil as the supreme god, Babylonian kings still traveled to the holy city of Nippur to seek recognition of their right to rule.", "title": "Worship" }, { "paragraph_id": 9, "text": "Enlil first rose to prominence during the twenty-fourth century BC, when the importance of the god An began to wane. During this time period, Enlil and An are frequently invoked together in inscriptions. Enlil remained the supreme god in Mesopotamia throughout the Amorite Period, with Amorite monarchs proclaiming Enlil as the source of their legitimacy. Enlil's importance began to wane after the Babylonian king Hammurabi conquered Sumer. The Babylonians worshipped Enlil under the name \"Elil\" and the Hurrians syncretized him with their own god Kumarbi. In one Hurrian ritual, Enlil and Apantu are invoked as \"the father and mother of Išḫara\". Enlil is also invoked alongside Ninlil as a member of \"the mighty and firmly established gods\".", "title": "Worship" }, { "paragraph_id": 10, "text": "During the Kassite Period (c. 1592 BC – 1155 BC), Nippur briefly managed to regain influence in the region and Enlil rose to prominence once again. 
From around 1300 BC onwards, Enlil was syncretized with the Assyrian national god Aššur, who was the most important deity in the Assyrian pantheon. Then, in 1230 BC, the Elamites attacked Nippur and the city fell into decline, taking the cult of Enlil along with it. Approximately one hundred years later, Enlil's role as the head of the pantheon was given to Marduk, the national god of the Babylonians.", "title": "Worship" }, { "paragraph_id": 11, "text": "Enlil was represented by the symbol of a horned cap, which consisted of up to seven superimposed pairs of ox-horns. Such crowns were an important symbol of divinity; gods had been shown wearing them ever since the third millennium BC. The horned cap remained consistent in form and meaning from the earliest days of Sumerian prehistory up until the time of the Persian conquest and beyond.", "title": "Iconography" }, { "paragraph_id": 12, "text": "The Sumerians had a complex numerological system, in which certain numbers were believed to hold special ritual significance. Within this system, Enlil was associated with the number fifty, which was considered sacred to him. Enlil was part of a triad of deities, which also included An and Enki. These three deities together were the embodiment of all the fixed stars in the night sky. An was identified with all the stars of the equatorial sky, Enlil with those of the northern sky, and Enki with those of the southern sky. The path of Enlil's celestial orbit was a continuous, symmetrical circle around the north celestial pole, but those of An and Enki were believed to intersect at various points. Enlil was associated with the constellation Boötes.", "title": "Iconography" }, { "paragraph_id": 13, "text": "The main source of information about the Sumerian creation myth is the prologue to the epic poem Gilgamesh, Enkidu, and the Netherworld (ETCSL 1.8.1.4), which briefly describes the process of creation: originally, there was only Nammu, the primeval sea. Then, Nammu gave birth to An, the sky, and Ki, the earth. An and Ki mated with each other, causing Ki to give birth to Enlil. Enlil separated An from Ki and carried off the earth as his domain, while An carried off the sky. Enlil marries his mother, Ki, and from this union all the plant and animal life on earth is produced.", "title": "Mythology" }, { "paragraph_id": 14, "text": "Enlil and Ninlil (ETCSL 1.2.1) is a nearly complete 152-line Sumerian poem describing the affair between Enlil and the goddess Ninlil. First, Ninlil's mother Nunbarshegunu instructs Ninlil to go bathe in the river. Ninlil goes to the river, where Enlil seduces her and impregnates her with their son, the moon-god Nanna. Because of this, Enlil is banished to Kur, the Sumerian underworld. Ninlil follows Enlil to the underworld, where he impersonates the \"man of the gate\". Ninlil demands to know where Enlil has gone, but Enlil, still impersonating the gatekeeper, refuses to answer. He then seduces Ninlil and impregnates her with Nergal, the god of death. The same scenario repeats, only this time Enlil instead impersonates the \"man of the river of the nether world, the man-devouring river\"; once again, he seduces Ninlil and impregnates her with the god Ninazu. 
Finally, Enlil impersonates the \"man of the boat\"; once again, he seduces Ninlil and impregnates her with Enbilulu, the \"inspector of the canals\".", "title": "Mythology" }, { "paragraph_id": 15, "text": "The story of Enlil's courtship with Ninlil is primarily a genealogical myth invented to explain the origins of the moon-god Nanna, as well as the various gods of the Underworld, but it is also, to some extent, a coming-of-age story describing Enlil and Ninlil's emergence from adolescence into adulthood. The story also explains Ninlil's role as Enlil's consort; in the poem, Ninlil declares, \"As Enlil is your master, so am I also your mistress!\" The story is also historically significant because, if the current interpretation of it is correct, it is the oldest known myth in which a god changes shape.", "title": "Mythology" }, { "paragraph_id": 16, "text": "In the Sumerian version of the flood story (ETCSL 1.7.4), the causes of the flood are unclear because the portion of the tablet recording the beginning of the story has been destroyed. Somehow, a mortal known as Ziusudra manages to survive the flood, likely through the help of the god Enki. The tablet begins in the middle of the description of the flood. The flood lasts for seven days and seven nights before it subsides. Then, Utu, the god of the Sun, emerges. Ziusudra opens a window in the side of the boat and falls down prostrate before the god. Next, he sacrifices an ox and a sheep in honor of Utu. At this point, the text breaks off again. When it picks back up, Enlil and An are in the midst of declaring Ziusudra immortal as an honor for having managed to survive the flood. The remaining portion of the tablet after this point is destroyed.", "title": "Mythology" }, { "paragraph_id": 17, "text": "In the later Akkadian version of the flood story, recorded in the Epic of Gilgamesh, Enlil actually causes the flood, seeking to annihilate every living thing on earth because the humans, who are vastly overpopulated, make too much noise and prevent him from sleeping. In this version of the story, the hero is Utnapishtim, who is warned ahead of time by Ea, the Babylonian equivalent of Enki, that the flood is coming. The flood lasts for seven days; when it ends, Ishtar, who had mourned the destruction of humanity, promises Utnapishtim that Enlil will never cause a flood again. When Enlil sees that Utnapishtim and his family have survived, he is outraged, but his son Ninurta speaks up in favor of humanity, arguing that, instead of causing floods, Enlil should simply ensure that humans never become overpopulated by reducing their numbers using wild animals and famines. Enlil goes into the boat; Utnapishtim and his wife bow before him. Enlil, now appeased, grants Utnapishtim immortality as a reward for his loyalty to the gods.", "title": "Mythology" }, { "paragraph_id": 18, "text": "Plucks at the roots, tears at the crown, the pickax spares the... plants; the pickax, its fate is decreed by father Enlil, the pickax is exalted.", "title": "Mythology" }, { "paragraph_id": 19, "text": "A nearly complete 108-line poem from the Early Dynastic Period (c. 2900 – 2350 BC) describes Enlil's invention of the mattock, a key agricultural pick, hoe, ax, or digging tool of the Sumerians. In the poem, Enlil conjures the mattock into existence and decrees its fate. The mattock is described as gloriously beautiful; it is made of pure gold and its head is carved from lapis lazuli. 
Enlil gives the tool over to the humans, who use it to build cities, subjugate their people, and pull up weeds. Enlil was believed to aid in the growth of plants.", "title": "Mythology" }, { "paragraph_id": 20, "text": "The Sumerian poem Enlil Chooses the Farmer-God (ETCSL 5.3.3) describes how Enlil, hoping \"to establish abundance and prosperity\", creates two gods Emesh and Enten, a shepherd and a farmer, respectively. The two gods argue and Emesh lays claim to Enten's position. They take the dispute before Enlil, who rules in favor of Enten; the two gods rejoice and reconcile.", "title": "Mythology" }, { "paragraph_id": 21, "text": "In the Sumerian poem Lugale (ETCSL 1.6.2), Enlil gives advice to his son, the god Ninurta, advising him on a strategy to slay the demon Asag. This advice is relayed to Ninurta by way of Sharur, his enchanted talking mace, which had been sent by Ninurta to the realm of the gods to seek counsel from Enlil directly.", "title": "Mythology" }, { "paragraph_id": 22, "text": "In the Old, Middle, and Late Babylonian myth of Anzû and the Tablet of Destinies, the Anzû, a giant, monstrous bird, betrays Enlil and steals the Tablet of Destinies, a sacred clay tablet belonging to Enlil that grants him his authority, while Enlil is preparing for a bath. The rivers dry up and the gods are stripped of their powers. The gods send Adad, Girra, and Shara to defeat the Anzû, but all of them fail. Finally, Ea proposes that the gods should send Ninurta, Enlil's son. Ninurta successfully defeats the Anzû and returns the Tablet of Destinies to his father. As a reward, Ninurta is granted a prominent seat on the council of the gods.", "title": "Mythology" }, { "paragraph_id": 23, "text": "A badly damaged text from the Neo-Assyrian Period (911 – 612 BC) describes Marduk leading his army of Anunnaki into the sacred city of Nippur and causing a disturbance. The disturbance causes a flood, which forces the resident gods of Nippur under the leadership of Enlil to take shelter in the Eshumesha temple to Ninurta. Enlil is enraged at Marduk's transgression and orders the gods of Eshumesha to take Marduk and the other Anunnaki as prisoners. The Anunnaki are captured, but Marduk appoints his front-runner Mushteshirhablim to lead a revolt against the gods of Eshumesha and sends his messenger Neretagmil to alert Nabu, the god of literacy. When the Eshumesha gods hear Nabu speak, they come out of their temple to search for him. Marduk defeats the Eshumesha gods and takes 360 of them as prisoners of war, including Enlil himself. Enlil protests that the Eshumesha gods are innocent, so Marduk puts them on trial before the Anunnaki. The text ends with a warning from Damkianna (another name for Ninhursag) to the gods and to humanity, pleading with them not to repeat the war between the Anunnaki and the gods of Eshumesha.", "title": "Mythology" } ]
Enlil, later known as Elil, is an ancient Mesopotamian god associated with wind, air, earth, and storms. He is first attested as the chief deity of the Sumerian pantheon, but he was later worshipped by the Akkadians, Babylonians, Assyrians, and Hurrians. Enlil's primary center of worship was the Ekur temple in the city of Nippur, which was believed to have been built by Enlil himself and was regarded as the "mooring-rope" of heaven and earth. He is also sometimes referred to in Sumerian texts as Nunamnir. According to one Sumerian hymn, Enlil himself was so holy that not even the other gods could look upon him. Enlil rose to prominence during the twenty-fourth century BC with the rise of Nippur. His cult fell into decline after Nippur was sacked by the Elamites in 1230 BC and he was eventually supplanted as the chief god of the Mesopotamian pantheon by the Babylonian national god Marduk. Enlil plays a vital role in the Sumerian creation myth; he separates An (heaven) from Ki (earth), thus making the world habitable for humans. In the Sumerian flood myth, Enlil rewards Ziusudra with immortality for having survived the flood and, in the Babylonian flood myth, Enlil is the cause of the flood himself, having sent the flood to exterminate the human race, who made too much noise and prevented him from sleeping. The myth of Enlil and Ninlil is about Enlil's serial seduction of the goddess Ninlil in various guises, resulting in the conception of the moon-god Nanna and the Underworld deities Nergal, Ninazu, and Enbilulu. Enlil was regarded as the inventor of the mattock and the patron of agriculture. Enlil also features prominently in several myths involving his son Ninurta, including Anzû and the Tablet of Destinies and Lugale.
2002-02-25T15:43:11Z
2023-12-22T23:42:51Z
[ "Template:Circa", "Template:Main", "Template:Notelist", "Template:Wikiquote", "Template:Commons category", "Template:Good article", "Template:Portal", "Template:Reflist", "Template:Refbegin", "Template:Sumerian mythology", "Template:Cite book", "Template:Cite encyclopedia", "Template:Citation", "Template:Short description", "Template:SpecialChars", "Template:Efn", "Template:Rquote", "Template:Cite web", "Template:Refend", "Template:Authority control", "Template:About", "Template:Redirect", "Template:Infobox deity", "Template:Snf", "Template:Sfn" ]
https://en.wikipedia.org/wiki/Enlil
9,630
Ecology
Ecology (from Ancient Greek οἶκος (oîkos) 'house', and -λογία (-logía) 'study of') is the study of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere level. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history. Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes. Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology). The word ecology (German: Ökologie) was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory. Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value. The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to a planetary scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Ecosystems are dynamic, they do not always follow a linear successional path, but they are always changing, sometimes rapidly and sometimes so slowly that it can take thousands of years for ecological processes to bring about certain successional stages of a forest. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. 
Some ecological principles, however, do exhibit collective properties where the sum of the components explain the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame. The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes. System behaviors must first be arrayed into different levels of the organization. Behaviors corresponding to higher levels occur at slow rates. Conversely, lower organizational levels exhibit rapid rates. For example, individual tree leaves respond rapidly to momentary changes in light intensity, CO2 concentration, and the like. The growth of the tree responds more slowly and integrates these short-term changes. O'Neill et al. (1986) The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open with regard to broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales. To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties." Biodiversity refers to the variety of life and its processes. It includes the variety of living organisms, the genetic differences among them, the communities and ecosystems in which they occur, and the ecological and evolutionary processes that keep them functioning, yet ever-changing and adapting. Noss & Carpenter (1994) Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. 
An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry. The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevasses where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment. Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness." Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. 
Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species. Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats." The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time. Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans. The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. 
For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance. Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat. A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration. An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by dN/dt = bN - dN = (b - d)N = rN, where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change. Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst: dN(t)/dt = N(t)(r - αN(t)), where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium, where (dN(t)/dt = 0), when the rates of increase and crowding are balanced, at N = r/α. A common, analogous model fixes the equilibrium, r/α, as K, which is known as the "carrying capacity." Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data." The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize".
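Before turning to metapopulations, the exponential and logistic models described above can be illustrated with a short numerical sketch. The following Python example is purely illustrative: the parameter values (r = 0.5 and α = 0.005, so r/α = 100) and the simple Euler integration are assumptions chosen here for demonstration, not values drawn from any particular ecological study.

```python
# Illustrative sketch of exponential (Malthusian) vs. logistic (Verhulst) growth.
# Hypothetical parameters: r is the intrinsic per-capita rate of growth and
# alpha is the crowding coefficient, so the equilibrium ("carrying capacity" K)
# sits at r / alpha.

r = 0.5        # assumed intrinsic rate of growth (per unit time)
alpha = 0.005  # assumed crowding coefficient
K = r / alpha  # equilibrium population size; 100 with these values

def simulate(n0, steps=200, dt=0.1, logistic=True):
    """Integrate dN/dt with a simple Euler step and return the trajectory."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        rate = n * (r - alpha * n) if logistic else r * n
        n += rate * dt
        trajectory.append(n)
    return trajectory

exponential = simulate(2, logistic=False)     # unchecked Malthusian growth
logistic_growth = simulate(2, logistic=True)  # growth slows near r/alpha

print(f"carrying capacity r/alpha = {K:.0f}")
print(f"exponential model after 20 time units: N = {exponential[-1]:.1f}")
print(f"logistic model after 20 time units:    N = {logistic_growth[-1]:.1f}")
```

Run as written, the logistic trajectory levels off near r/α = 100 while the exponential trajectory keeps climbing, which is the qualitative contrast the two models are meant to capture.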
Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population. In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure. Community ecology examines how interactions among species and their environment affect the abundance, distribution and diversity of species within communities. Johnson & Stinchcomb (2007) Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals. These ecosystems, as we may call them, are of the most various kinds and sizes. They form one category of the multitudinous physical systems of the universe, which range from the universe as a whole down to the atom. Tansley (1935) Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. 
carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m²) in a wetland in relation to decomposition and consumption rates (g C/m²/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria). The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity. A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. A simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called a food chain. The larger interlocking pattern of food chains in an ecological community creates a complex food web. Food webs are a type of concept map or a heuristic device that is used to illustrate and study pathways of energy and material flows. Food webs are often limited relative to the real world. Complete empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from food web microcosm studies are extrapolated to larger systems. Feeding relations require extensive investigations into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems. Food webs exhibit principles of ecological emergence through the nature of trophic relationships: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Theoretical and empirical studies identify non-random emergent patterns of few strong and many weak linkages that explain how ecological communities remain stable over time. Food webs are composed of subgroups where members in a community are linked by strong interactions, and the weak interactions occur between these subgroups. This increases food web stability. Step by step, lines or relations are drawn until a web of life is illustrated. A trophic level (from Greek troph, τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species.
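To make the network view of a food web concrete, the sketch below encodes feeding links as a simple mapping from consumers to their prey and derives a trophic level for each species as one plus the mean level of its prey, a common convention; the species and links used here are hypothetical and chosen only for illustration.

```python
# Hypothetical food web encoded as a directed structure: each species maps to
# the list of species it feeds on. Basal species (no prey) have trophic level 1;
# a consumer's level is 1 + the mean level of its prey.

food_web = {
    "algae": [],
    "grass": [],
    "grasshopper": ["grass"],
    "snail": ["algae"],
    "frog": ["grasshopper", "snail"],
    "heron": ["frog", "snail"],  # feeds at more than one level (omnivory)
}

def trophic_level(species, web, cache=None):
    """Return the trophic level of a species in an acyclic food web."""
    if cache is None:
        cache = {}
    if species in cache:
        return cache[species]
    prey = web[species]
    if not prey:
        level = 1.0
    else:
        level = 1.0 + sum(trophic_level(p, web, cache) for p in prey) / len(prey)
    cache[species] = level
    return level

for species in food_web:
    print(f"{species:12s} trophic level = {trophic_level(species, food_web):.2f}")
```

The non-integer level of the top consumer in this toy web (3.50 for the hypothetical heron) anticipates the point made below: omnivory blurs the idea of neatly discrete trophic levels.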
Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, the species naturally sort into a 'pyramid of numbers'. Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing. Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores." A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alter trophic dynamics and other food web connections and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature, as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability. Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas).
While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied. Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small-scale patterns do not necessarily explain large-scale phenomena, otherwise captured in the expression (coined by Aristotle) 'the sum is greater than the parts'. "Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960. Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed." Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species.
The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells. Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to its functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation. All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba. Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness. Predator-prey interactions are an introductory concept in food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoiding, fleeing, or defending themselves. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency.
For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk." Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors. Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that an animal's interaction with its habitat has on its cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...". Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members. Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis.
Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients. Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure. Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory. Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming. A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection.
As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection. In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring. The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography. The history of life on Earth has been a history of interaction between living things and their surroundings. To a large extent, the physical form and the habits of the earth's vegetation and its animal life have been molded by the environment. Considering the whole span of earthly time, the opposite effect, in which life actually modifies its surroundings, has been relatively slight. Only within the moment of time represented by the present century has one species man acquired significant power to alter the nature of his world. Rachel Carson, "Silent Spring" Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. 
"Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century. The ecological complexities human beings are facing through the technological transformation of the planetary biome has brought on the Anthropocene. The unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions and the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity and beneficial to human health (cognitive and physiological), economies, and they even provide an information or reference function as a living library giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology as they are the ultimate base foundation of global economics as every commodity, and the capacity for exchange ultimately stems from the ecosystems on Earth. Ecosystem management is not just about science nor is it simply an extension of traditional resource management; it offers a fundamental reframing of how humans may work with nature. Grumbine (1994) Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed in the industrial investment of restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of their wetland environments by implementing soil amendments that will improve groundwater storage and flow, and trimming or removal of vegetation that could cause harm to water quality. Ecological science is used in the methods of sustainable harvesting, disease, and fire outbreak management, in fisheries stock management, for integrating land-use with protected areas and communities, and conservation in complex geo-political landscapes. 
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat. The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem. Ecosystems are regularly confronted with natural environmental variations and disturbances over time and geographic space. A disturbance is any process that removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances occur over vastly different ranges in terms of magnitudes as well as distances and time periods, and are both the cause and product of natural fluctuations in death rates, species assemblages, and biomass densities within an ecological community. These disturbances create places of renewal where new directions emerge from the patchwork of natural experimentation and opportunity. Ecological resilience is a cornerstone theory in ecosystem management. Biodiversity fuels the resilience of ecosystems acting as a kind of regenerative insurance. Metabolism – the rate at which energy and material resources are taken up from the environment, transformed within an organism, and allocated to maintenance, growth and reproduction – is a fundamental physiological trait. Ernest et al. The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface, and greenhouse effects trapped heat.
There were untapped sources of free energy within the mixture of reducing and oxidizing gases that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved. Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hν → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior. The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy. There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds. Wetland conditions such as shallow water, high plant productivity, and anaerobic substrates provide a suitable environment for important physical, biological, and chemical processes. Because of these processes, wetlands play a vital role in global nutrient and element cycles. Cronk & Fennessy (2001) Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic, where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles.
Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduce the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water. The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the Earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth, are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra). Climatic and osmotic pressure places physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations. Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics.
On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems. Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s. Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems. Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor), results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. 
Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils, they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and are fed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils. Ecologists study and measure nutrient budgets to understand how these materials are regulated and how they flow and recycle through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplifies and ultimately regulates the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry. The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, through the early-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm. In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates.
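The pool and flux figures quoted above lend themselves to a rough back-of-envelope comparison. The Python sketch below simply divides the quoted carbon stocks by the quoted annual fossil fuel flux; it is an illustrative calculation based only on those rounded figures, not a projection or a carbon-cycle model.

```python
# Back-of-envelope comparison of the carbon pools and the fossil fuel flux
# quoted in the text (all values in gigatonnes of carbon, Gt C).

ocean_pool = 40_000   # Gt C held in the oceans
land_pool = 2_070     # Gt C held in vegetation and soil
fossil_flux = 6.3     # Gt C emitted per year from fossil fuels

# Years of emissions, at the quoted constant rate, equivalent to each pool.
print(f"land pool  / annual emissions ~ {land_pool / fossil_flux:,.0f} years")
print(f"ocean pool / annual emissions ~ {ocean_pool / fossil_flux:,.0f} years")

# Annual emissions expressed as a fraction of each pool.
print(f"annual emissions ~ {100 * fossil_flux / land_pool:.2f}% of the land pool")
print(f"annual emissions ~ {100 * fossil_flux / ocean_pool:.3f}% of the ocean pool")
```

At the quoted rate, annual fossil fuel emissions amount to roughly 0.3% of the vegetation-and-soil pool and about 0.016% of the oceanic pool.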
The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands producing significant climate feedbacks and globally altered biogeochemical cycles. By ecology, we mean the whole science of the relations of the organism to the environment including, in the broad sense, all the "conditions of existence". Thus, the theory of evolution explains the housekeeping relations of organisms mechanistically as the necessary consequences of effectual causes; and so forms the monistic groundwork of ecology. Ernst Haeckel (1866) Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory where varieties are viewed as the real phenomena of interest and having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche. Nowhere can one see more clearly illustrated what may be called the sensibility of such an organic complex, – expressed by the fact that whatever affects any species belonging to it, must speedily have its influence of some sort upon the whole assemblage. He will thus be made to see the impossibility of studying any form completely, out of relation to the other forms, – the necessity for taking a comprehensive survey of the whole as a condition to a satisfactory understanding of any part. Stephen Forbes (1887) Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). 
Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" (German: Oekologie, Ökologie) was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy. Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous. From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences. Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892. In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. 
The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations. The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology. In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers. This whole chain of poisoning, then, seems to rest on a base of minute plants which must have been the original concentrators. But what of the opposite end of the food chain—the human being who, in probable ignorance of all this sequence of events, has rigged his fishing tackle, caught a string of fish from the waters of Clear Lake, and taken them home to fry for his supper? Rachel Carson (1962) Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s. In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. 
Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
[ { "paragraph_id": 0, "text": "Ecology (from Ancient Greek οἶκος (oîkos) 'house', and -λογία (-logía) 'study of') is the study of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere level. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.", "title": "" }, { "paragraph_id": 1, "text": "Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.", "title": "" }, { "paragraph_id": 2, "text": "Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).", "title": "" }, { "paragraph_id": 3, "text": "The word ecology (German: Ökologie) was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.", "title": "" }, { "paragraph_id": 4, "text": "Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.", "title": "" }, { "paragraph_id": 5, "text": "The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to a planetary scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Ecosystems are dynamic, they do not always follow a linear successional path, but they are always changing, sometimes rapidly and sometimes so slowly that it can take thousands of years for ecological processes to bring about certain successional stages of a forest. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. 
The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explain the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 6, "text": "The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 7, "text": "System behaviors must first be arrayed into different levels of the organization. Behaviors corresponding to higher levels occur at slow rates. Conversely, lower organizational levels exhibit rapid rates. For example, individual tree leaves respond rapidly to momentary changes in light intensity, CO2 concentration, and the like. The growth of the tree responds more slowly and integrates these short-term changes.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 8, "text": "O'Neill et al. (1986)", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 9, "text": "The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open with regard to broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 10, "text": "To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that \"effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties.\"", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 11, "text": "Biodiversity refers to the variety of life and its processes. 
It includes the variety of living organisms, the genetic differences among them, the communities and ecosystems in which they occur, and the ecological and evolutionary processes that keep them functioning, yet ever-changing and adapting.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 12, "text": "Noss & Carpenter (1994)", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 13, "text": "Biodiversity (an abbreviation of \"biological diversity\") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity, and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services, which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services, and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 14, "text": "The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, \"habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal.\" For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in rock crevices, where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 15, "text": "Definitions of the niche date back to 1917, but G.
Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: \"the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes.\" The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a \"Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness.\"", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 16, "text": "Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 17, "text": "Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: \"organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats.\"", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 18, "text": "The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. 
The term \"niche construction\" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 19, "text": "Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 20, "text": "The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 21, "text": "Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. 
A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 22, "text": "A primary law of population ecology is the Malthusian growth model, which states, \"a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant.\" Simplified population models usually start with four variables: death, birth, immigration, and emigration.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 23, "text": "An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by: dN/dt = (b - d)N = rN,", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 24, "text": "where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 25, "text": "Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst: dN(t)/dt = N(t)(r - αN(t)),", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 26, "text": "where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change, commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium, where (dN(t)/dt = 0), when the rates of increase and crowding are balanced, at r/α. A common, analogous model fixes the equilibrium r/α as K, which is known as the \"carrying capacity.\"", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 27, "text": "Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as \"several competing hypotheses are simultaneously confronted with the data.\"", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 28, "text": "The concept of metapopulations was defined in 1969 as \"a population of populations which go extinct locally and recolonize\". Metapopulation ecology is another statistical approach that is often used in conservation research.
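The growth models just described can be made concrete with a short numerical sketch. This is a minimal illustration rather than anything taken from the article: the parameter values (r, α, the starting population, and the integration step) are arbitrary assumptions chosen only to show the qualitative contrast between unbounded exponential growth and logistic growth toward the equilibrium r/α.

```python
# Minimal sketch of the two single-population growth models described above.
# All parameter values are illustrative assumptions, not data from the article.

def exponential_step(n, r, dt):
    """dN/dt = r*N: growth is unbounded whenever r > 0."""
    return n + r * n * dt

def logistic_step(n, r, alpha, dt):
    """dN/dt = N*(r - alpha*N): growth slows as N approaches the equilibrium r/alpha."""
    return n + n * (r - alpha * n) * dt

r, alpha = 0.5, 0.005        # intrinsic growth rate and crowding coefficient (assumed)
dt, steps = 0.1, 400         # simple Euler integration over 40 time units (assumed)
n_exp = n_log = 2.0          # small founding population (assumed)

for _ in range(steps):
    n_exp = exponential_step(n_exp, r, dt)
    n_log = logistic_step(n_log, r, alpha, dt)

print(f"exponential model after t = 40: N is roughly {n_exp:,.0f}")
print(f"logistic model after t = 40: N is roughly {n_log:,.1f} (equilibrium r/alpha = {r / alpha:.0f})")
```

Whatever the starting value, the logistic trajectory levels off near r/α (here 100), which is the balancing of increase and crowding that the text describes and that the analogous carrying-capacity model labels K. Metapopulation approaches extend this single-population picture to many local populations linked by movement.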
Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 29, "text": "In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 30, "text": "Community ecology examines how interactions among species and their environment affect the abundance, distribution and diversity of species within communities.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 31, "text": "Johnson & Stinchcomb (2007)", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 32, "text": "Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 33, "text": "These ecosystems, as we may call them, are of the most various kinds and sizes. 
They form one category of the multitudinous physical systems of the universe, which range from the universe as a whole down to the atom.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 34, "text": "Tansley (1935)", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 35, "text": "Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 36, "text": "The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh (\"Man and Nature\"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 37, "text": "A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathways that move from a basal trophic species to a top consumer are called food chains. The larger interlocking pattern of food chains in an ecological community creates a complex food web. Food webs are a type of concept map or a heuristic device that is used to illustrate and study pathways of energy and material flows.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 38, "text": "Food webs are often limited relative to the real world. Complete empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from food web microcosm studies are extrapolated to larger systems. Feeding relations require extensive investigations into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web.
Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 39, "text": "Food webs exhibit principles of ecological emergence through the nature of trophic relationships: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Theoretical and empirical studies identify non-random emergent patterns of few strong and many weak linkages that explain how ecological communities remain stable over time. Food webs are composed of subgroups where members in a community are linked by strong interactions, and the weak interactions occur between these subgroups. This increases food web stability. Step by step lines or relations are drawn until a web of life is illustrated.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 40, "text": "A trophic level (from Greek troph, τροφή, trophē, meaning \"food\" or \"feeding\") is \"a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source.\" Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 41, "text": "Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and Detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because compared to herbivores, they are relatively inefficient at grazing.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 42, "text": "Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. 
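One standard way to quantify feeding position, the prey-averaged trophic level, assigns producers level 1 and places each consumer one level above the mean level of its prey; it makes the omnivory problem easy to see. The toy web below is a fabricated example whose species and feeding links are assumptions for illustration only, not data from the article.

```python
# Prey-averaged trophic levels on a small, made-up food web.
# Keys are species; values list what each species eats (producers eat nothing).
from functools import lru_cache

diet = {
    "grass": [],
    "grasshopper": ["grass"],
    "mouse": ["grass", "grasshopper"],    # an omnivore feeding on two levels
    "hawk": ["mouse", "grasshopper"],
}

@lru_cache(maxsize=None)
def trophic_level(species):
    """Producers are level 1; a consumer is 1 + the mean trophic level of its prey."""
    prey = diet[species]
    if not prey:
        return 1.0
    return 1.0 + sum(trophic_level(p) for p in prey) / len(prey)

for species in diet:
    print(f"{species:12s} trophic level = {trophic_level(species):.2f}")
```

In this toy web the omnivorous mouse falls at 2.5 and the hawk at 3.25, fractional positions rather than whole numbers, so even a four-species web refuses to sort cleanly into discrete levels.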
This has led some ecologists to \"reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction.\" Nonetheless, recent studies have shown that real trophic levels do exist, but \"above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores.\"", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 43, "text": "A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds means that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics, other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 44, "text": "Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.", "title": "Levels, scope, and scale of organization" }, { "paragraph_id": 45, "text": "Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy, and matter is integrated into larger units that superimpose onto the smaller parts. 
\"What were wholes on one level become parts on a higher one.\" Small scale patterns do not necessarily explain large scale phenomena, otherwise captured in the expression (coined by Aristotle) 'the sum is greater than the parts'.", "title": "Complexity" }, { "paragraph_id": 46, "text": "\"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric.\" From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.", "title": "Complexity" }, { "paragraph_id": 47, "text": "Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. \"New properties emerge because the components interact, not because the basic nature of the components is changed.\"", "title": "Complexity" }, { "paragraph_id": 48, "text": "Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.", "title": "Complexity" }, { "paragraph_id": 49, "text": "Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. 
In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.", "title": "Relation to evolution" }, { "paragraph_id": 50, "text": "All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.", "title": "Relation to evolution" }, { "paragraph_id": 51, "text": "Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increases reproductive fitness.", "title": "Relation to evolution" }, { "paragraph_id": 52, "text": "Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, \"[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk\" or \"[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk.\"", "title": "Relation to evolution" }, { "paragraph_id": 53, "text": "Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. 
The displays are driven by sexual selection as an advertisement of quality of traits among suitors.", "title": "Relation to evolution" }, { "paragraph_id": 54, "text": "Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. \"Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition.\" As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that \"...we must see the organism and environment as bound together in reciprocal specification and selection...\".", "title": "Relation to evolution" }, { "paragraph_id": 55, "text": "Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats where eusocialism has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group; whereby, it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.", "title": "Relation to evolution" }, { "paragraph_id": 56, "text": "Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.", "title": "Relation to evolution" }, { "paragraph_id": 57, "text": "Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. 
This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.", "title": "Relation to evolution" }, { "paragraph_id": 58, "text": "Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.", "title": "Relation to evolution" }, { "paragraph_id": 59, "text": "Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography, and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.", "title": "Relation to evolution" }, { "paragraph_id": 60, "text": "A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, the density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources.
Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.", "title": "Relation to evolution" }, { "paragraph_id": 61, "text": "In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.", "title": "Relation to evolution" }, { "paragraph_id": 62, "text": "The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.", "title": "Relation to evolution" }, { "paragraph_id": 63, "text": "The history of life on Earth has been a history of interaction between living things and their surroundings. To a large extent, the physical form and the habits of the earth's vegetation and its animal life have been molded by the environment. Considering the whole span of earthly time, the opposite effect, in which life actually modifies its surroundings, has been relatively slight. Only within the moment of time represented by the present century has one species man acquired significant power to alter the nature of his world.", "title": "Human ecology" }, { "paragraph_id": 64, "text": "Rachel Carson, \"Silent Spring\"", "title": "Human ecology" }, { "paragraph_id": 65, "text": "Ecology is as much a biological science as it is a human science. 
Human ecology is an interdisciplinary investigation into the ecology of our species. \"Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three.\" The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and scholars in other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.", "title": "Human ecology" }, { "paragraph_id": 66, "text": "The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions and the inability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services that are critically necessary and beneficial to human health (cognitive and physiological) and to economies, and they even provide an information or reference function as a living library giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology because they are the ultimate foundation of global economics, as every commodity and the capacity for exchange ultimately stem from the ecosystems on Earth.", "title": "Human ecology" }, { "paragraph_id": 67, "text": "Ecosystem management is not just about science nor is it simply an extension of traditional resource management; it offers a fundamental reframing of how humans may work with nature.", "title": "Human ecology" }, { "paragraph_id": 68, "text": "Grumbine (1994)", "title": "Human ecology" }, { "paragraph_id": 69, "text": "Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century \"will be the era of restoration in ecology\". Ecological science has boomed with industrial investment in restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. 
The city of Boston implemented the Wetland Ordinance, improving the stability of their wetland environments by implementing soil amendments that will improve groundwater storage and flow, and trimming or removal of vegetation that could cause harm to water quality. Ecological science is used in the methods of sustainable harvesting, disease, and fire outbreak management, in fisheries stock management, for integrating land-use with protected areas and communities, and conservation in complex geo-political landscapes.", "title": "Human ecology" }, { "paragraph_id": 70, "text": "The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment \"includes the physical world, the social world of human relations and the built world of human creation.\" The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.", "title": "Relation to the environment" }, { "paragraph_id": 71, "text": "The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. After the effective environmental components are understood through reference to their causes; however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.", "title": "Relation to the environment" }, { "paragraph_id": 72, "text": "Ecosystems are regularly confronted with natural environmental variations and disturbances over time and geographic space. A disturbance is any process that removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances occur over vastly different ranges in terms of magnitudes as well as distances and time periods, and are both the cause and product of natural fluctuations in death rates, species assemblages, and biomass densities within an ecological community. These disturbances create places of renewal where new directions emerge from the patchwork of natural experimentation and opportunity. Ecological resilience is a cornerstone theory in ecosystem management. 
Biodiversity fuels the resilience of ecosystems acting as a kind of regenerative insurance.", "title": "Relation to the environment" }, { "paragraph_id": 73, "text": "Metabolism – the rate at which energy and material resources are taken up from the environment, transformed within an organism, and allocated to maintenance, growth and reproduction – is a fundamental physiological trait.", "title": "Relation to the environment" }, { "paragraph_id": 74, "text": "Ernest et al.", "title": "Relation to the environment" }, { "paragraph_id": 75, "text": "The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gasses that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.", "title": "Relation to the environment" }, { "paragraph_id": 76, "text": "Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.", "title": "Relation to the environment" }, { "paragraph_id": 77, "text": "The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.", "title": "Relation to the environment" }, { "paragraph_id": 78, "text": "There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. 
Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.", "title": "Relation to the environment" }, { "paragraph_id": 79, "text": "Wetland conditions such as shallow water, high plant productivity, and anaerobic substrates provide a suitable environment for important physical, biological, and chemical processes. Because of these processes, wetlands play a vital role in global nutrient and element cycles.", "title": "Relation to the environment" }, { "paragraph_id": 80, "text": "Cronk & Fennessy (2001)", "title": "Relation to the environment" }, { "paragraph_id": 81, "text": "Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduces the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.", "title": "Relation to the environment" }, { "paragraph_id": 82, "text": "The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. 
The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).", "title": "Relation to the environment" }, { "paragraph_id": 83, "text": "Climatic and osmotic pressure places physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.", "title": "Relation to the environment" }, { "paragraph_id": 84, "text": "Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.", "title": "Relation to the environment" }, { "paragraph_id": 85, "text": "Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. 
While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s.", "title": "Relation to the environment" }, { "paragraph_id": 86, "text": "Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.", "title": "Relation to the environment" }, { "paragraph_id": 87, "text": "Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor), results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and are fed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils places the origin for bioturbation to a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period played a significant role in the early development of ecological trophism in soils.", "title": "Relation to the environment" }, { "paragraph_id": 88, "text": "Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. 
Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplify and ultimately regulate the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.", "title": "Relation to the environment" }, { "paragraph_id": 89, "text": "The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, through the early-mid Eocene volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.", "title": "Relation to the environment" }, { "paragraph_id": 90, "text": "In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contributes to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands producing significant climate feedbacks and globally altered biogeochemical cycles.", "title": "Relation to the environment" }, { "paragraph_id": 91, "text": "By ecology, we mean the whole science of the relations of the organism to the environment including, in the broad sense, all the \"conditions of existence\". 
Thus, the theory of evolution explains the housekeeping relations of organisms mechanistically as the necessary consequences of effectual causes; and so forms the monistic groundwork of ecology.", "title": "History" }, { "paragraph_id": 92, "text": "Ernst Haeckel (1866)", "title": "History" }, { "paragraph_id": 93, "text": "Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory where varieties are viewed as the real phenomena of interest and having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of \"natural dentistry\". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche.", "title": "History" }, { "paragraph_id": 94, "text": "Nowhere can one see more clearly illustrated what may be called the sensibility of such an organic complex, – expressed by the fact that whatever affects any species belonging to it, must speedily have its influence of some sort upon the whole assemblage. He will thus be made to see the impossibility of studying any form completely, out of relation to the other forms, – the necessity for taking a comprehensive survey of the whole as a condition to a satisfactory understanding of any part.", "title": "History" }, { "paragraph_id": 95, "text": "Stephen Forbes (1887)", "title": "History" }, { "paragraph_id": 96, "text": "", "title": "History" }, { "paragraph_id": 97, "text": "Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of \"terrestrial physics\". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term \"ecology\" (German: Oekologie, Ökologie) was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). 
Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.", "title": "History" }, { "paragraph_id": 98, "text": "Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.", "title": "History" }, { "paragraph_id": 99, "text": "From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.", "title": "History" }, { "paragraph_id": 100, "text": "Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term \"oekology\" (which eventually morphed into home economics) in the U.S. as early as 1892.", "title": "History" }, { "paragraph_id": 101, "text": "In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.", "title": "History" }, { "paragraph_id": 102, "text": "The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term \"holism\" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. 
Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.", "title": "History" }, { "paragraph_id": 103, "text": "In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.", "title": "History" }, { "paragraph_id": 104, "text": "This whole chain of poisoning, then, seems to rest on a base of minute plants which must have been the original concentrators. But what of the opposite end of the food chain—the human being who, in probable ignorance of all this sequence of events, has rigged his fishing tackle, caught a string of fish from the waters of Clear Lake, and taken them home to fry for his supper?", "title": "History" }, { "paragraph_id": 105, "text": "Rachel Carson (1962)", "title": "History" }, { "paragraph_id": 106, "text": "Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.", "title": "History" }, { "paragraph_id": 107, "text": "In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.", "title": "History" }, { "paragraph_id": 108, "text": "", "title": "External links" } ]
Ecology is the study of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere level. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history. Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes. Ecology has practical applications in conservation biology, wetland management, natural resource management, urban planning, community health, economics, basic and applied science, and human social interaction. The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory. Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production, the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
2001-09-28T21:18:27Z
2023-12-04T19:02:32Z
[ "Template:Subject bar", "Template:Cref2", "Template:Lang-de", "Template:Anchor", "Template:Multiple image", "Template:Cols", "Template:Colend", "Template:Cnote2", "Template:Main", "Template:Portal", "Template:Cnote2 End", "Template:Reflist", "Template:Branches of biology", "Template:Earth", "Template:Infobox", "Template:TopicTOC-Biology", "Template:Quote box", "Template:Cite journal", "Template:Use dmy dates", "Template:Cite book", "Template:Environmental science", "Template:Short description", "Template:Etymology", "Template:Cn", "Template:Cite web", "Template:Rp", "Template:Biology nav", "Template:Authority control", "Template:Hatgrp", "Template:See also", "Template:Cnote2 Begin", "Template:Sister project links", "Template:Nature nav", "Template:Modelling ecosystems", "Template:Good article" ]
https://en.wikipedia.org/wiki/Ecology
9,631
Glossary of country dance terms
An alphabetic list of modern country dance terminology:
[ { "paragraph_id": 0, "text": "An alphabetic list of modern country dance terminology:", "title": "" } ]
An alphabetic list of modern country dance terminology: Active couple – for longways sets, the active couple is the couple nearest the head of the set within each minor set. There are always exactly as many active couples as minor sets. If the dance is "duple minor," this works out to every other couple, while in a "triple minor" it is every third couple. In older dances from the seventeenth and eighteenth centuries, the active couples do more complicated figures than the inactives, whence the name; however, this is not so usual in modern dances. Active couples may also be termed "first couple" or "the Ones," while inactives are "second couple/the Twos" and "third couple/the Threes." Arm right – couples link right arms and move forward in a circle, returning to their starting positions. Back to back – facing another person, move forward passing right shoulders and fall back to place passing left. May also start by passing left and falling back right. Called a do si do in contra dance. Balance – a single, generally found in pairs, as "balance forward and back." Becket formation – a 20th-century variation of the duple minor longways set. Each couple stands either on the men's line or the women's line, with the lady on the right. Within each minor set, one couple faces the left wall of the hall and the other the right wall, rather than facing the head or foot. There are no active or inactive couples. Progression is accomplished by each couple moving to their own left along their line at the end of each iteration of the dance; thus the couples on the men's line go up, while those on the women's line go down. This was originally a contra dance form but can sometimes be found in English country dance. Both hands – two dancers face each other and give hands right to left and left to right. Cast – turn outward and dance up or down outside the set, as directed. The instruction "cast off" is frequently synonymous with "cast down". Changes of right and left – like the circular hey, but dancers give hands as they pass. The number of changes is given first. Chassé – slipping step to right or left as directed. Circular hey – dancers face partners or along the line and pass right and left alternating a stated number of changes. Usually done without hands, the circular hey may also be done by more than two couples facing alternately and moving in opposite directions - usually to their original places. This name for the figure was invented by Cecil Sharp and does not appear in sources pre-1900. Nonetheless, some early country dances calling for heys have been interpreted in modern times using circular heys. In early dances, where the hey is called a "double hey", it works to interpret this as an oval hey, like the modern circular hey but adapted to the straight sides of a longways formation. Clockwise – in a ring, move to one's left. In a turn single turn to the right. Contrary – your contrary is not your partner. In Playford's original notation, this term meant the same thing that Corner means today. Corner – in a two-couple minor set, the dancer diagonally opposite one. The first man and the second woman are first corners, while the first woman and second man are second corners. In other dance formations, it has similar meanings. Counter-clockwise – the opposite of clockwise - in a ring, move right. In a turn single, turn to the left. Cross hands – face and give left to left and right to right. 
Cross over or pass – change places with another dancer moving forward and passing by the right shoulder, unless otherwise directed. Cross and go below – cross as above and go outside below one couple, ending improper. Double – four steps forward or back, closing the feet on the 4th step. Fall (back) – dance backwards. Figure of 8 – a weaving figure in which a moving couple crosses between a standing couple and casts around them in a figure 8 pattern. To do this once, ending in one's partner's place, is a half figure of 8; to do it twice, returning to one's own place, is a full figure of 8. The right of way in the cross has traditionally been given to the lady; some communities prefer to give it to whichever dancer is coming from the left-hand side. In a double figure of 8, the other couple does not stand still, but performs their own figure of 8 simultaneously; they begin with the cast and end with the cross to avoid collision. Forward – lead or move in the direction you are facing. Grand chain – a handing hey done in a circle of more than two couples. Gypsy – two dancers move around each other in a circular path while facing each other. Hands across – right or left hands are given to corners, and dancers move in the direction they face. In contra dance, instead of taking one's corner's hand, one grasps the wrist of the next dancer. Also known as a star right/left. Hands three, four etc. – the designated number of dancers form a ring and move around in the direction indicated, usually first to the left and back to the right. Head and foot – the head of a longways set is the end with the music; the foot is the other end. Toward the head is "up," and toward the foot is "down." Hey – a weaving figure in which dancers move in single file along a set track, passing one another on alternating sides. In Scottish country dance, the hey is known as the reel. "Hole in the Wall" cross – a type of cross. In a regular cross, the dancers walk past each other and turn upon reaching the other line; in a "Hole in the Wall" cross, they meet in the middle, make a brief half-turn without hands, and back into one another's place, maintaining eye contact the while. Named for "Hole in the Wall," a dance in which it appears. Honour – couples step forward and right, close, shift weight, and curtsey or bow, then (usually) repeat to their left. In the time of Playford's original manual, a woman's curtsey was similar to the modern one, but a man's honour kept the upper body upright and involved sliding the left leg forward while bending the right knee Improper – see proper. Ladies' chain – a figure in which ladies dance first with each other in the center of the set and then with the gentlemen on the sides. In its simplest form, two ladies begin in second corner positions. The ladies pass each other by right hand and turn with the gentlemen by left hand, approximately once around, to end with the ladies in each other's place and the gentlemen where they began. The figure can be extended to more couples in a ring, as long as the dancers in the ring are alternating between gentlemen and ladies. If the gentlemen turn the ladies only by left hand, that is an open ladies' chain; if they also place their right hands on the ladies' backs during the turn, that is a closed ladies' chain. 
In English country dance, both closed and open ladies' chains are to be found, and the gentlemen make a short cast up or down the set to meet the ladies; in contra dance, only the closed ladies' chain is done, and the gentlemen sidestep to meet the ladies. The men's chain is a simple gender reversal, but is a much rarer figure. Lead – join inside hands and walk in a certain direction. To lead up or down is to walk toward or away from the head of the set; to lead out is to walk away from the other line of dancers. Link – see set and link. Longways set – a line of couples dancing together. This is usually "longways for as many as will," indicating that any number of couples may join the longways set—although some dances require a three- or four-couple longways set. If the longways set is not restricted to three or four couples, it will be subdivided into minor sets of two or three couples each. "Mad Robin" figure – a figure in which one couple dances around their respective neighbours. Men take one step forward and then slide to the right passing in front of their neighbour, then step backward and slide left behind their neighbour. Conversely women take one step backward and then slide to the left passing behind their neighbour, then step forward and slide right in front of their neighbour. In one version, the dancer who is going outside the set at the moment casts out to begin that motion; in the other, the active couple maintains eye contact. The term Mad Robin comes from the name of the dance which originated the figure. A version involving all four dancers was developed for contra dancing and later readmitted into some modern English dances. Minor set – a longways set is subdivided into several minor sets. In a "duple minor" dance, every two couples form a minor set. In a "triple minor" dance, every three couples form a minor set. The active couple is always the couple in each minor set who are closest to the head. After every iteration of the dance, the progression will create new minor sets for the next iteration. Neighbour – the person you are standing beside, but not your partner. Opposite – the person you are facing, if you are not facing your partner. Poussette – two dancers face, give both hands and change places as a couple with two adjacent dancers. One pair moves a double toward one wall, the other toward the other wall; they shift up or down, respectively, and move into the other couple's place with another double. This completes a half-poussette; it is repeated for a whole poussette. In a draw poussette, each couple turns instead of reversing direction, so that the same dancer in each couple is always in the lead. Progression – the process by which every couple will eventually dance with every other couple in the set, if the dance is repeated enough times. In a "duple minor" dance with five couples dancing, for example, the couples are initially in this order: Active/Inactive/Active/Inactive/Out. This represents two minor sets and one couple who are "standing out" due to having no one to dance with. After one iteration of the dance, every active couple will have moved below the inactive couple in their minor set, which in the example would be thus: Inactive/Active/Inactive/Active/Out. For the next iteration, any inactive couple at the top will stand out, while any couple standing out will begin dancing as actives or inactives. So the next iteration would begin as follows: Out/Active/Inactive/Active/Inactive. 
The minor sets now contain couples A-D and couples C-E, while couple B is "standing out." Dances in other forms progress differently, though the "triple minor" progression is quite similar. Progression, double or triple – a longways dance has a double progression if the arrangement of couples into minor sets advances twice during one iteration of the dance instead of just once. A triple-progression dance advances thrice during one iteration. Proper – with the man on the left and the woman on the right, from the perspective of someone facing the music. Improper is the opposite. The terms carry no value judgment, but only indicate whether one is on one's "home" side. A dance in duple-minor longways form is termed "improper" if the active couples are improper by default; this is the exception in English country dance, but the rule in contra dance. Right and left – see changes of right and left. Set – a dancer steps right, closes with left foot and shifts weight to it, then steps back to the right foot (right-together-step); then repeats the process mirror-image (left-together-step). In some areas, such as the Society for Creative Anachronism, it is done starting to the left. It may be done in place or advancing. Often followed by a turn single. In Scottish country dance there are several variations; in contra dance its place is generally taken by a balance right and left. Not to be confused with terms indicating groups of dancers, like longways set or minor set. Set and link – a figure done by a pair of dancers and simultaneously by another pair of dancers who are facing them. Most commonly this means that the men do it facing the women, while the women do it facing the men. First, all dancers set; then the dancer on the left of each pair dances a turn single right, while also moving to the right, to end in his or her neighbor's place. Meanwhile, the dancer on the right of each pair casts to the left into his or her neighbor's place; thus the men have traded places with each other, and so have the women. This figure is most commonly found in Scottish country dance. Sicilian circle – a type of dance formation, roughly equivalent to a longways set rolled into a ring. Every couple stands along the line of a large circle, facing another couple; thus half of the couples face clockwise, while the other half face counterclockwise. Since, unlike the longways set, the Sicilian circle has no place for dancers to "stand out," Sicilian circle dances must be done by an even number of couples. The progression is similar to that of a "duple minor," but since there is nowhere for couples to reverse direction, every clockwise couple will only dance with the counterclockwise couples. Siding – two dancers go forward in four counts to meet side by side, then back in four counts to where they started the figure. As depicted by Feuillet, this is done right side by right side the first time, left by left the second time. In Cecil Sharp's reconstruction, the dancers pass by left shoulder, turn to face each other, then return along the same path, passing by right shoulder; this is then repeated. So-called Cecil Sharp siding is no longer considered historical, but is still used on its own merits. Standard siding is sometimes called Pat Shaw siding to distinguish it from Cecil Sharp siding. Single – two steps in any direction, closing feet on the second step. 
The second step tends to be interpreted as a closing action in which weight usually stays on the same foot as before, consistent with descriptions from Renaissance sources. Slipping circle – dancers take hands in a circle and chassé left or right. Star – see hands across. Straight hey for four – dancers face alternately, the two in the middle facing out. Dancers pass right shoulders on either end and weave to the end opposite. If the last pass at the end is by the right, the dancer turns right and reenters the line by the same shoulder; vice versa if the last pass was to the left. Dancers end in their original places. Straight hey for three – the first dancer faces the other two and passes right shoulders with the second dancer, left shoulder with the third - the other dancers moving and passing the indicated shoulder. On making the last pass, each dancer makes a whole turn on the end, bearing right if the last pass was by the right shoulder or left if last pass was by the left, and reenters the figure returning to place. Each dancer describes a figure of eight pattern. Swing – a turn with two hands, but moving faster and making more than one revolution. Several variants exist, including the ballroom swing and the Welsh swing. Track figure – a generic term for any composite figure where the dancers involved travel within the set. An example track figure might be "Ones cast around the Twos, cross, cast around the Threes, and lead back up to place." The figure of 8 would be considered a track figure if it were not common enough to have its own name. Turn both-hands – face, give both hands, and make a complete circular, clockwise turn to place. Turn by right or left – dancers join right hands and turn around, separate, and fall to places. Turn single – dancers turn around in four steps. Turn single right is a clockwise turn; turn single left is a counterclockwise turn. May involve a backward motion, as after a set advancing. Up a double and back – common combination in which dancers, usually having linked hands in a line, advance a double and then retire another double.
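The progression rules described above are mechanical enough to simulate. The following sketch (illustrative Python; the function name, data layout, and couple labels are arbitrary choices made for this example, not standard dance notation) advances a duple-minor longways set one iteration at a time and reproduces the five-couple sequence given under Progression.

```python
def advance(line, direction):
    """Advance a duple-minor longways set by one iteration of the dance.

    line      -- couple labels ordered from the head (top) of the set to the foot.
    direction -- dict mapping each couple to 'down' (dancing as the active/Ones,
                 progressing toward the foot) or 'up' (dancing as the
                 inactive/Twos, progressing toward the head).
    Returns the new line order and the set of couples that stood out.
    """
    out = set()
    # An inactive couple that has reached the head, or an active couple that has
    # reached the foot, stands out for this iteration and then switches role.
    if direction[line[0]] == 'up':
        out.add(line[0])
    if direction[line[-1]] == 'down':
        out.add(line[-1])

    dancing = [c for c in line if c not in out]

    # Pair the dancing couples from the top down into minor sets of two couples;
    # within each minor set the active couple finishes below the inactive couple.
    progressed, i = [], 0
    while i + 1 < len(dancing):
        progressed += [dancing[i + 1], dancing[i]]
        i += 2
    progressed += dancing[i:]   # an unpaired couple at the foot simply waits

    # Couples standing out keep their end positions and reverse role for next time.
    if line[0] in out:
        progressed.insert(0, line[0])
    if line[-1] in out:
        progressed.append(line[-1])
    for c in out:
        direction[c] = 'down' if direction[c] == 'up' else 'up'
    return progressed, out


# The five-couple example from the Progression entry: A and C start as actives,
# B and D as inactives, and E waits at the foot until an active couple reaches it.
line = ['A', 'B', 'C', 'D', 'E']
direction = {'A': 'down', 'B': 'up', 'C': 'down', 'D': 'up', 'E': 'up'}
for _ in range(2):
    line, out = advance(line, direction)
    print(line, 'standing out:', sorted(out))
# ['B', 'A', 'D', 'C', 'E'] standing out: []   (E waited at the foot, unpaired)
# ['B', 'D', 'A', 'E', 'C'] standing out: ['B']
```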
2001-08-03T22:52:56Z
2023-11-05T11:45:56Z
[ "Template:Short description", "Template:Culture of England", "Template:Reflist", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Glossary_of_country_dance_terms
9,632
Ecosystem
An ecosystem (or ecological system) consists of all the organisms and the physical environment with which they interact. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals play an important role in the movement of matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes. Ecosystems are controlled by external and internal factors. External factors such as climate, the parent material which forms the soil, and topography control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them. Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil, and the atmosphere. Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination, and even things like beauty, inspiration, and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. 
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals. An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows. "Ecosystem processes" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked. The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment". Tansley regarded ecosystems not simply as natural units, but as "mental isolates". Tansley later defined the spatial extent of ecosystems using the term "ecotope". G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems. Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem. Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape, versus one present on an adjacent steep hillside. Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. 
The introduction of non-native species can cause substantial shifts in ecosystem function. Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors. Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect. Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the gross GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis. Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system. Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem. Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion. In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. 
Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level. The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains. The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted. Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones. Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material. The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources. Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. 
Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available. Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in wet, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth. Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance. Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply." The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times. From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene. Ecosystems continually exchange energy and carbon with the wider environment. 
Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical. Macronutrients which are required by all plants in large quantities include the primary nutrients (which are most limiting as they are used in largest amounts): Nitrogen, phosphorus, potassium. Secondary major nutrients (less often limiting) include: Calcium, magnesium, sulfur. Micronutrients required by all plants in small quantities include boron, chloride, copper, iron, manganese, molybdenum, zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, vanadium. Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems. When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification. Mycorrhizal fungi which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function. Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. 
Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter. Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case for example for exotic species. The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem. An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat. Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet. The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. 
Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests". Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. None of these is the "best" classification. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system. Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate. Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted. The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition. 
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change. The conceptual framework of the IPBES includes six primary interlinked elements: nature, nature's benefits to people, anthropogenic assets, institutions and governance systems and other indirect drivers of change, direct drivers of change, and good quality of life. Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species. As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, galamsey (Illegal Artisanal Small Scale mining), the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends. When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; "intergenerational sustainability [is] a precondition for management, not an afterthought". While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry). Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past. 
The following articles are types of ecosystems for particular types of regions or zones: Ecosystem instances in specific regions of the world:
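In symbols, the production terms defined in the article text can be summarized as a simple carbon balance. This is only a notational sketch of the relationships stated in the prose; the respiration symbols R_a (plant, or autotrophic, respiration) and R_e (whole-ecosystem respiration) are shorthand introduced here rather than terms used in the article.

\[
\mathrm{NPP} = \mathrm{GPP} - R_a, \qquad \mathrm{NEP} = \mathrm{GPP} - R_e
\]

Since about half of GPP is respired by plants, NPP is roughly half of GPP, and in the absence of disturbance NEP equals the ecosystem's net carbon accumulation.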
[ { "paragraph_id": 0, "text": "An ecosystem (or ecological system) consists of all the organisms and the physical environment with which they interact. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals play an important role in the movement of matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.", "title": "" }, { "paragraph_id": 1, "text": "Ecosystems are controlled by external and internal factors. External factors such as climate, parent material which forms the soil and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them.", "title": "" }, { "paragraph_id": 2, "text": "Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things; such as plants, animals, and bacteria, while abiotic are non-living components; such as water, soil and atmosphere.", "title": "" }, { "paragraph_id": 3, "text": "Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the \"tangible, material products\" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally \"improvements in the condition or location of things of value\". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. 
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered \"collapsed\". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.", "title": "" }, { "paragraph_id": 4, "text": "An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact. The biotic and abiotic components are linked together through nutrient cycles and energy flows.", "title": "Definition" }, { "paragraph_id": 5, "text": "\"Ecosystem processes\" are the transfers of energy and materials from one pool to another. Ecosystem processes are known to \"take place at a wide range of scales\". Therefore, the correct scale of study depends on the question asked.", "title": "Definition" }, { "paragraph_id": 6, "text": "The term \"ecosystem\" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request. Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment. He later refined the term, describing it as \"The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment\". Tansley regarded ecosystems not simply as natural units, but as \"mental isolates\". Tansley later defined the spatial extent of ecosystems using the term \"ecotope\".", "title": "Definition" }, { "paragraph_id": 7, "text": "G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a \"systems approach\" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.", "title": "Definition" }, { "paragraph_id": 8, "text": "Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that \"most strongly determines ecosystem processes and structure\". Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.", "title": "Processes" }, { "paragraph_id": 9, "text": "Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. 
Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape, versus one present on an adjacent steep hillside.", "title": "Processes" }, { "paragraph_id": 10, "text": "Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments that are located in different parts of the world can end up doing things very differently simply because they have different pools of species present. The introduction of non-native species can cause substantial shifts in ecosystem function.", "title": "Processes" }, { "paragraph_id": 11, "text": "Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading. Other factors like disturbance, succession or the types of species present are also internal factors.", "title": "Processes" }, { "paragraph_id": 12, "text": "Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.", "title": "Processes" }, { "paragraph_id": 13, "text": "Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP). About half of the gross GPP is respired by plants in order to provide the energy that supports their growth and maintenance. The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP). Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.", "title": "Processes" }, { "paragraph_id": 14, "text": "Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration. The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. 
After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.", "title": "Processes" }, { "paragraph_id": 15, "text": "Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem. Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration. In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.", "title": "Processes" }, { "paragraph_id": 16, "text": "Energy can also be released from an ecosystem through disturbances such as wildfire or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.", "title": "Processes" }, { "paragraph_id": 17, "text": "In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems. In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.", "title": "Processes" }, { "paragraph_id": 18, "text": "The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains.", "title": "Processes" }, { "paragraph_id": 19, "text": "The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.", "title": "Processes" }, { "paragraph_id": 20, "text": "Decomposition processes can be separated into three categories—leaching, fragmentation and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it). Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.", "title": "Processes" }, { "paragraph_id": 21, "text": "Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. 
Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition. Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.", "title": "Processes" }, { "paragraph_id": 22, "text": "The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.", "title": "Processes" }, { "paragraph_id": 23, "text": "Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself. Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.", "title": "Processes" }, { "paragraph_id": 24, "text": "Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in wet, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.", "title": "Processes" }, { "paragraph_id": 25, "text": "Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances. When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Resilience thinking also includes humanity as an integral part of the biosphere where we are dependent on ecosystem services for our survival and must build and maintain their natural capacities to withstand shocks and disturbances. Time plays a central role over a wide range, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.", "title": "Processes" }, { "paragraph_id": 26, "text": "Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as \"a relatively discrete event in time that removes plant biomass\". This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, glacial advances, to volcanic eruptions. 
Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content. Disturbance is followed by succession, a \"directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply.\"", "title": "Processes" }, { "paragraph_id": 27, "text": "The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery. More severe and more frequent disturbances result in longer recovery times.", "title": "Processes" }, { "paragraph_id": 28, "text": "From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder than usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850 when large areas were reverted to forests. Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.", "title": "Processes" }, { "paragraph_id": 29, "text": "Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, gases or is applied as fertilizer. Most terrestrial ecosystems are nitrogen-limited in the short term making nitrogen cycling an important control on ecosystem production. Over the long term, phosphorus availability can also be critical.", "title": "Processes" }, { "paragraph_id": 30, "text": "Macronutrients which are required by all plants in large quantities include the primary nutrients (which are most limiting as they are used in largest amounts): Nitrogen, phosphorus, potassium. Secondary major nutrients (less often limiting) include: Calcium, magnesium, sulfur. Micronutrients required by all plants in small quantities include boron, chloride, copper, iron, manganese, molybdenum, zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, vanadium.", "title": "Processes" }, { "paragraph_id": 31, "text": "Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. 
Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants. Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust. Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.", "title": "Processes" }, { "paragraph_id": 32, "text": "When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification. Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.", "title": "Processes" }, { "paragraph_id": 33, "text": "Mycorrhizal fungi which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots. This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.", "title": "Processes" }, { "paragraph_id": 34, "text": "Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics). Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.", "title": "Processes" }, { "paragraph_id": 35, "text": "Biodiversity plays an important role in ecosystem functioning. Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other. Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example. However, beyond some level of species richness, additional species may have little additive effect unless they differ substantially from species already present. This is the case for example for exotic species.", "title": "Processes" }, { "paragraph_id": 36, "text": "The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. 
Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.", "title": "Processes" }, { "paragraph_id": 37, "text": "An ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.", "title": "Processes" }, { "paragraph_id": 38, "text": "Ecosystem ecology is the \"study of the interactions between organisms and their environment as an integrated system\". The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.", "title": "Study approaches" }, { "paragraph_id": 39, "text": "The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem. Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.", "title": "Study approaches" }, { "paragraph_id": 40, "text": "Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems). American ecologist Stephen R. Carpenter has argued that microcosm experiments can be \"irrelevant and diversionary\" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.", "title": "Study approaches" }, { "paragraph_id": 41, "text": "Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as \"wet coastal needle-leafed forests\".", "title": "Study approaches" }, { "paragraph_id": 42, "text": "Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra. There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems such as by their level of human impact (see anthropogenic biome), or by their integration with social processes or technological processes or their novelty (e.g. novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties. 
None of these is the \"best\" classification.", "title": "Study approaches" }, { "paragraph_id": 43, "text": "Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.", "title": "Study approaches" }, { "paragraph_id": 44, "text": "Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 45, "text": "Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the \"tangible, material products\" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 46, "text": "Ecosystem services, on the other hand, are generally \"improvements in the condition or location of things of value\". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 47, "text": "The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural and supporting services. It concludes that human activity is having a significant and escalating impact on the biodiversity of the world ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's \"life-support system\", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 48, "text": "The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services. It is intended to serve a similar role to the Intergovernmental Panel on Climate Change. 
The conceptual framework of the IPBES includes six primary interlinked elements: nature, nature's benefits to people, anthropogenic assets, institutions and governance systems and other indirect drivers of change, direct drivers of change, and good quality of life.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 49, "text": "Ecosystem services are limited and also threatened by human activities. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 50, "text": "As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include: environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, galamsey (Illegal Artisanal Small Scale mining), the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 51, "text": "Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 52, "text": "These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems). Ecosystem collapse could be reversible and in this way differs from species extinction. Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 53, "text": "When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management. Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem; \"intergenerational sustainability [is] a precondition for management, not an afterthought\". 
While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems (see, for example, agroecosystem and close to nature forestry).", "title": "Human interactions with ecosystems" }, { "paragraph_id": 54, "text": "Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.", "title": "Human interactions with ecosystems" }, { "paragraph_id": 55, "text": "The following articles are types of ecosystems for particular types of regions or zones:", "title": "See also" }, { "paragraph_id": 56, "text": "Ecosystem instances in specific regions of the world:", "title": "See also" } ]
An ecosystem consists of all the organisms and the physical environment with which they interact. These biotic and abiotic components are linked together through nutrient cycles and energy flows. Energy enters the system through photosynthesis and is incorporated into plant tissue. By feeding on plants and on one another, animals play an important role in the movement of matter and energy through the system. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes. Ecosystems are controlled by external and internal factors. External factors such as climate, parent material which forms the soil and topography, control the overall structure of an ecosystem but are not themselves influenced by the ecosystem. Internal factors are controlled, for example, by decomposition, root competition, shading, disturbance, succession, and the types of species present. While the resource inputs are generally controlled by external processes, the availability of these resources within the ecosystem is controlled by internal factors. Therefore, internal factors not only control ecosystem processes but are also controlled by them. Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience. Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, those that look at differences between ecosystems to elucidate how they work and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things; such as plants, animals, and bacteria, while abiotic are non-living components; such as water, soil and atmosphere. Ecosystems provide a variety of goods and services upon which people depend. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced species and invasive species. 
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
2001-09-27T10:29:05Z
2023-12-18T09:01:22Z
[ "Template:Rp", "Template:Reflist", "Template:Div col end", "Template:Anchor", "Template:Expand list", "Template:Scholia-inline", "Template:Nature nav", "Template:Systems", "Template:See also", "Template:Further", "Template:Clear", "Template:Short description", "Template:Pp-vandalism", "Template:TopicTOC-Biology", "Template:Wikivoyage-inline", "Template:Authority control", "Template:Cite journal", "Template:Open access", "Template:Webarchive", "Template:Commons category-inline", "Template:Center", "Template:Div col", "Template:Hatgrp", "Template:Multiple image", "Template:Portal", "Template:Main cat", "Template:Cite book", "Template:Cite web", "Template:Earth", "Template:TOC limit", "Template:Main", "Template:Convert", "Template:Wiktionary-inline", "Template:Modelling ecosystems", "Template:Composition (Biology)" ]
https://en.wikipedia.org/wiki/Ecosystem
9,633
E (mathematical constant)
The number e, also known as Euler's number, is a mathematical constant approximately equal to 2.71828 that can be characterized in many ways. It is the base of natural logarithms. It is the limit of (1 + 1/n) as n approaches infinity, an expression that arises in the computation of compound interest. It can also be calculated as the sum of the infinite series It is also the unique positive number a such that the graph of the function y = a has a slope of 1 at x = 0. The (natural) exponential function f(x) = e is the unique function f that equals its own derivative and satisfies the equation f(0) = 1; hence one can also define e as f(1). The natural logarithm, or logarithm to base e, is the inverse function to the natural exponential function. The natural logarithm of a number k > 1 can be defined directly as the area under the curve y = 1/x between x = 1 and x = k, in which case e is the value of k for which this area equals 1 (see image). There are various other characterizations. The number e is sometimes called Euler's number (not to be confused with Euler's constant γ {\displaystyle \gamma } )—after the Swiss mathematician Leonhard Euler—or Napier's constant—after John Napier. The constant was discovered by the Swiss mathematician Jacob Bernoulli while studying compound interest. The number e is of great importance in mathematics, alongside 0, 1, π, and i. All five appear in one formulation of Euler's identity e i π + 1 = 0 {\displaystyle e^{i\pi }+1=0} and play important and recurring roles across mathematics. Like the constant π, e is irrational (it cannot be represented as a ratio of integers) and transcendental (it is not a root of any non-zero polynomial with rational coefficients). To 40 decimal places, the value of e is: The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e {\displaystyle e} . It is assumed that the table was written by William Oughtred. The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant e occurs as the limit where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 {\displaystyle n=12} for monthly compounding). The first symbol used for this constant was the letter b by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691. Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler's Mechanica (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard. Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest: An account starts with $1.00 and pays 100 percent interest per year. If the interest is credited once, at the end of the year, the value of the account at year-end will be $2.00. What happens if the interest is computed and credited more frequently during the year? If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5 = $2.25 at the end of the year. 
Compounding quarterly yields $1.00 × 1.25 = $2.44140625, and compounding monthly yields $1.00 × (1 + 1/12) = $2.613035.... If there are n compounding intervals, the interest for each interval will be 100%/n and the value at the end of the year will be $1.00 × (1 + 1/n). Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals. Compounding weekly (n = 52) yields $2.692596..., while compounding daily (n = 365) yields $2.714567... (approximately two cents more). The limit as n grows large is the number that came to be known as e. That is, with continuous compounding, the account value will reach $2.718281828... More generally, an account that starts at $1 and offers an annual interest rate of R will, after t years, yield e dollars with continuous compounding. (Note here that R is the decimal equivalent of the rate of interest expressed as a percentage, so for 5% interest, R = 5/100 = 0.05.) The number e itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in n and plays it n times. As n increases, the probability that gambler will lose all n bets approaches 1/e. For n = 20, this is already approximately 1/2.789509.... This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in n chance of winning. Playing n times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. The probability of winning k times out of n trials is: In particular, the probability of winning zero times (k = 0) is The limit of the above expression, as n tends to infinity, is precisely 1/e. The normal distribution with zero mean and unit standard deviation is known as the standard normal distribution, given by the probability density function The constraint of unit variance (and thus also unit standard deviation) results in the 1/2 in the exponent, and the constraint of unit total area under the curve ϕ ( x ) {\displaystyle \phi (x)} results in the factor 1 / 2 π {\displaystyle \textstyle 1/{\sqrt {2\pi }}} . This function is symmetric around x = 0, where it attains its maximum value 1 / 2 π {\displaystyle \textstyle 1/{\sqrt {2\pi }}} , and has inflection points at x = ±1. Another application of e, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the hat check problem: n guests are invited to a party and, at the door, the guests all check their hats with the butler, who in turn places the hats into n boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so puts the hats into boxes selected at random. The problem of de Montmort is to find the probability that none of the hats gets put into the right box. This probability, denoted by p n {\displaystyle p_{n}\!} , is: As n tends to infinity, pn approaches 1/e. Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats are in the right box is n!/e, rounded to the nearest integer, for every positive n. The maximum value of x x {\displaystyle {\sqrt[{x}]{x}}} occurs at x = e {\displaystyle x=e} . 
Equivalently, for any value of the base b > 1, it is the case that the maximum value of x − 1 log b x {\displaystyle x^{-1}\log _{b}x} occurs at x = e {\displaystyle x=e} (Steiner's problem, discussed below). This is useful in the problem of a stick of length L that is broken into n equal parts. The value of n that maximizes the product of the lengths is then either The quantity x − 1 log b x {\displaystyle x^{-1}\log _{b}x} is also a measure of information gleaned from an event occurring with probability 1 / x {\displaystyle 1/x} , so that essentially the same optimal division appears in optimal planning problems like the secretary problem. The number e occurs naturally in connection with many problems involving asymptotics. An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π appear: As a consequence, The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms. A general exponential function y = a has a derivative, given by a limit: The parenthesized limit on the right is independent of the variable x. Its value turns out to be the logarithm of a to base e. Thus, when the value of a is set to e, this limit is equal to 1, and so one arrives at the following simple identity: Consequently, the exponential function with base e is particularly suited to doing calculus. Choosing e (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler. Another motivation comes from considering the derivative of the base-a logarithm (i.e., loga x), for x > 0: where the substitution u = h/x was made. The base-a logarithm of e is 1, if a equals e. So symbolically, The logarithm with this special base is called the natural logarithm, and is denoted as ln; it behaves well under differentiation since there is no undetermined limit to carry through the calculations. Thus, there are two ways of selecting such special numbers a. One way is to set the derivative of the exponential function a equal to a, and solve for a. The other way is to set the derivative of the base a logarithm to 1/x and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually the same: the number e. Other characterizations of e are also possible: one is as the limit of a sequence, another is as the sum of an infinite series, and still others rely on integral calculus. So far, the following two (equivalent) properties have been introduced: The following four characterizations can be proved to be equivalent: Similarly: As in the motivation, the exponential function e is important in part because it is the unique function (up to multiplication by a constant K) that is equal to its own derivative: and therefore its own antiderivative as well: Equivalently, the family of functions where K is any real or complex number, is the full solution to the differential equation The number e is the unique real number such that for all positive x. Also, we have the inequality for all real x, with equality if and only if x = 0. Furthermore, e is the unique base of the exponential for which the inequality a ≥ x + 1 holds for all x. This is a limiting case of Bernoulli's inequality. Steiner's problem asks to find the global maximum for the function This maximum occurs precisely at x = e. 
(One can check that the derivative of ln f(x) is zero only for this value of x.) Similarly, x = 1/e is where the global minimum occurs for the function The infinite tetration converges if and only if x ∈ [(1/e), e] ≈ [0.06599, 1.4447] , shown by a theorem of Leonhard Euler. The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion does not terminate. (See also Fourier's proof that e is irrational.) Furthermore, by the Lindemann–Weierstrass theorem, e is transcendental, meaning that it is not a solution of any non-zero polynomial equation with rational coefficients. It was the first number to be proved transcendental without having been specifically constructed for this purpose (compare with Liouville number); the proof was given by Charles Hermite in 1873. It is conjectured that e is normal, meaning that when e is expressed in any base the possible digits in that base are uniformly distributed (occur with equal probability in any sequence of given length). It is conjectured that e is not a Kontsevich-Zagier period. The exponential function e may be written as a Taylor series Because this series is convergent for every complex value of x, it is commonly used to extend the definition of e to the complex numbers. This, with the Taylor series for sin and cos x, allows one to derive Euler's formula: which holds for every complex x. The special case with x = π is Euler's identity: from which it follows that, in the principal branch of the logarithm, Furthermore, using the laws for exponentiation, which is de Moivre's formula. The expressions of cos x and sin x in terms of the exponential function can be deduced from the Taylor series: The expression cos x + i sin x {\textstyle \cos x+i\sin x} is sometimes abbreviated as cis(x). The number e can be represented in a variety of ways: as an infinite series, an infinite product, a continued fraction, or a limit of a sequence. Two of these representations, often used in introductory calculus courses, are the limit given above, and the series obtained by evaluating at x = 1 the above power series representation of e. Less common is the continued fraction which written out looks like This continued fraction for e converges three times as quickly: Many other series, sequence, continued fraction, and infinite product representations of e have been proved. In addition to exact analytical expressions for representation of e, there are stochastic techniques for estimating e. One such approach begins with an infinite sequence of independent random variables X1, X2..., drawn from the uniform distribution on [0, 1]. Let V be the least number n such that the sum of the first n observations exceeds 1: Then the expected value of V is e: E(V) = e. The number of known digits of e has increased substantially during the last decades. This is due both to the increased performance of computers and to algorithmic improvements. Since around 2010, the proliferation of modern high-speed desktop computers has made it feasible for amateurs to compute trillions of digits of e within acceptable amounts of time. On Dec 5, 2020, a record-setting calculation was made, giving e to 31,415,926,535,897 (approximately π×10) digits. One way to compute the digits of e is with the series A faster method involves two recursive functions p ( a , b ) {\displaystyle p(a,b)} and q ( a , b ) {\displaystyle q(a,b)} . The functions are defined as . The expression produces the digits of e. 
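The defining recurrences for p(a, b) and q(a, b) appear to have been dropped during extraction. As a hedged illustration only, the following Python sketch uses one standard binary-splitting scheme, in which p(a, b)/q(a, b) equals the partial sum of a!/k! for k from a+1 to b, so that e is approximately 1 + p(0, n)/q(0, n); it is not necessarily the exact pair of functions defined here.

def pq(a, b):
    # p/q = a!/(a+1)! + a!/(a+2)! + ... + a!/b!
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = pq(a, m)
    p2, q2 = pq(m, b)
    return p1 * q2 + p2, q1 * q2

def e_digits(d):
    # choose n with n! > 10^(d + 10) so the truncated series is accurate enough
    n, f = 1, 1
    while f < 10 ** (d + 10):
        n += 1
        f *= n
    p, q = pq(0, n)
    return (q + p) * 10 ** d // q   # e scaled by 10^d; the last digit is truncated, not rounded

print(e_digits(50))   # prints 2718281828459045... (the digit 2 followed by the first 50 decimals of e)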
This method uses binary splitting to compute e with fewer single-digit arithmetic operations and reduced bit complexity. Combining this with Fast Fourier Transform-based methods of multiplying integers makes computing the digits very fast. During the emergence of internet culture, individuals and organizations sometimes paid homage to the number e. In an early example, the computer scientist Donald Knuth let the version numbers of his program Metafont approach e. The versions are 2, 2.7, 2.71, 2.718, and so forth. In another instance, the IPO filing for Google in 2004, rather than a typical round-number amount of money, the company announced its intention to raise 2,718,281,828 USD, which is e billion dollars rounded to the nearest dollar. Google was also responsible for a billboard that appeared in the heart of Silicon Valley, and later in Cambridge, Massachusetts; Seattle, Washington; and Austin, Texas. It read "{first 10-digit prime found in consecutive digits of e}.com". The first 10-digit prime in e is 7427466391, which starts at the 99th digit. Solving this problem and visiting the advertised (now defunct) website led to an even more difficult problem to solve, which consisted in finding the fifth term in the sequence 7182818284, 8182845904, 8747135266, 7427466391. It turned out that the sequence consisted of 10-digit numbers found in consecutive digits of e whose digits summed to 49. The fifth term in the sequence is 5966290435, which starts at the 127th digit. Solving this second problem finally led to a Google Labs webpage where the visitor was invited to submit a résumé.
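The compounding values and the gambler's-ruin probability quoted above can be checked numerically. A minimal sketch, assuming ordinary floating-point arithmetic is adequate for the handful of digits quoted:

from math import e

for label, n in [("yearly", 1), ("semi-annually", 2), ("quarterly", 4),
                 ("monthly", 12), ("weekly", 52), ("daily", 365)]:
    print(f"{label:13s} n = {n:3d}   (1 + 1/n)^n = {(1 + 1/n) ** n:.9f}")
print(f"limit (continuous compounding)     e = {e:.9f}")

n = 20
p_lose = (1 - 1/n) ** n   # probability the gambler loses all n = 20 plays
print(f"(1 - 1/20)^20 = {p_lose:.6f} = 1/{1 / p_lose:.6f}   (1/e = {1 / e:.6f})")

The printed values reproduce the figures quoted in the text: 2.25, 2.44140625, 2.613035..., 2.692596..., 2.714567..., and 1/2.789509....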
[ { "paragraph_id": 0, "text": "The number e, also known as Euler's number, is a mathematical constant approximately equal to 2.71828 that can be characterized in many ways. It is the base of natural logarithms. It is the limit of (1 + 1/n) as n approaches infinity, an expression that arises in the computation of compound interest. It can also be calculated as the sum of the infinite series", "title": "" }, { "paragraph_id": 1, "text": "It is also the unique positive number a such that the graph of the function y = a has a slope of 1 at x = 0.", "title": "" }, { "paragraph_id": 2, "text": "The (natural) exponential function f(x) = e is the unique function f that equals its own derivative and satisfies the equation f(0) = 1; hence one can also define e as f(1). The natural logarithm, or logarithm to base e, is the inverse function to the natural exponential function. The natural logarithm of a number k > 1 can be defined directly as the area under the curve y = 1/x between x = 1 and x = k, in which case e is the value of k for which this area equals 1 (see image). There are various other characterizations.", "title": "" }, { "paragraph_id": 3, "text": "The number e is sometimes called Euler's number (not to be confused with Euler's constant γ {\\displaystyle \\gamma } )—after the Swiss mathematician Leonhard Euler—or Napier's constant—after John Napier. The constant was discovered by the Swiss mathematician Jacob Bernoulli while studying compound interest.", "title": "" }, { "paragraph_id": 4, "text": "The number e is of great importance in mathematics, alongside 0, 1, π, and i. All five appear in one formulation of Euler's identity e i π + 1 = 0 {\\displaystyle e^{i\\pi }+1=0} and play important and recurring roles across mathematics. Like the constant π, e is irrational (it cannot be represented as a ratio of integers) and transcendental (it is not a root of any non-zero polynomial with rational coefficients). To 40 decimal places, the value of e is:", "title": "" }, { "paragraph_id": 5, "text": "The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e {\\displaystyle e} . It is assumed that the table was written by William Oughtred.", "title": "History" }, { "paragraph_id": 6, "text": "The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant e occurs as the limit", "title": "History" }, { "paragraph_id": 7, "text": "where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 {\\displaystyle n=12} for monthly compounding).", "title": "History" }, { "paragraph_id": 8, "text": "The first symbol used for this constant was the letter b by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691.", "title": "History" }, { "paragraph_id": 9, "text": "Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler's Mechanica (1736). It is unknown why Euler chose the letter e. 
Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard.", "title": "History" }, { "paragraph_id": 10, "text": "Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest:", "title": "Applications" }, { "paragraph_id": 11, "text": "An account starts with $1.00 and pays 100 percent interest per year. If the interest is credited once, at the end of the year, the value of the account at year-end will be $2.00. What happens if the interest is computed and credited more frequently during the year?", "title": "Applications" }, { "paragraph_id": 12, "text": "If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5 = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25 = $2.44140625, and compounding monthly yields $1.00 × (1 + 1/12) = $2.613035.... If there are n compounding intervals, the interest for each interval will be 100%/n and the value at the end of the year will be $1.00 × (1 + 1/n).", "title": "Applications" }, { "paragraph_id": 13, "text": "Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals. Compounding weekly (n = 52) yields $2.692596..., while compounding daily (n = 365) yields $2.714567... (approximately two cents more). The limit as n grows large is the number that came to be known as e. That is, with continuous compounding, the account value will reach $2.718281828...", "title": "Applications" }, { "paragraph_id": 14, "text": "More generally, an account that starts at $1 and offers an annual interest rate of R will, after t years, yield e dollars with continuous compounding.", "title": "Applications" }, { "paragraph_id": 15, "text": "(Note here that R is the decimal equivalent of the rate of interest expressed as a percentage, so for 5% interest, R = 5/100 = 0.05.)", "title": "Applications" }, { "paragraph_id": 16, "text": "The number e itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in n and plays it n times. As n increases, the probability that gambler will lose all n bets approaches 1/e. For n = 20, this is already approximately 1/2.789509....", "title": "Applications" }, { "paragraph_id": 17, "text": "This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in n chance of winning. Playing n times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. 
The probability of winning k times out of n trials is:", "title": "Applications" }, { "paragraph_id": 18, "text": "In particular, the probability of winning zero times (k = 0) is", "title": "Applications" }, { "paragraph_id": 19, "text": "The limit of the above expression, as n tends to infinity, is precisely 1/e.", "title": "Applications" }, { "paragraph_id": 20, "text": "The normal distribution with zero mean and unit standard deviation is known as the standard normal distribution, given by the probability density function", "title": "Applications" }, { "paragraph_id": 21, "text": "The constraint of unit variance (and thus also unit standard deviation) results in the 1/2 in the exponent, and the constraint of unit total area under the curve ϕ ( x ) {\\displaystyle \\phi (x)} results in the factor 1 / 2 π {\\displaystyle \\textstyle 1/{\\sqrt {2\\pi }}} . This function is symmetric around x = 0, where it attains its maximum value 1 / 2 π {\\displaystyle \\textstyle 1/{\\sqrt {2\\pi }}} , and has inflection points at x = ±1.", "title": "Applications" }, { "paragraph_id": 22, "text": "Another application of e, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the hat check problem: n guests are invited to a party and, at the door, the guests all check their hats with the butler, who in turn places the hats into n boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so puts the hats into boxes selected at random. The problem of de Montmort is to find the probability that none of the hats gets put into the right box. This probability, denoted by p n {\\displaystyle p_{n}\\!} , is:", "title": "Applications" }, { "paragraph_id": 23, "text": "As n tends to infinity, pn approaches 1/e. Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats are in the right box is n!/e, rounded to the nearest integer, for every positive n.", "title": "Applications" }, { "paragraph_id": 24, "text": "The maximum value of x x {\\displaystyle {\\sqrt[{x}]{x}}} occurs at x = e {\\displaystyle x=e} . Equivalently, for any value of the base b > 1, it is the case that the maximum value of x − 1 log b x {\\displaystyle x^{-1}\\log _{b}x} occurs at x = e {\\displaystyle x=e} (Steiner's problem, discussed below).", "title": "Applications" }, { "paragraph_id": 25, "text": "This is useful in the problem of a stick of length L that is broken into n equal parts. The value of n that maximizes the product of the lengths is then either", "title": "Applications" }, { "paragraph_id": 26, "text": "The quantity x − 1 log b x {\\displaystyle x^{-1}\\log _{b}x} is also a measure of information gleaned from an event occurring with probability 1 / x {\\displaystyle 1/x} , so that essentially the same optimal division appears in optimal planning problems like the secretary problem.", "title": "Applications" }, { "paragraph_id": 27, "text": "The number e occurs naturally in connection with many problems involving asymptotics. An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π appear:", "title": "Applications" }, { "paragraph_id": 28, "text": "As a consequence,", "title": "Applications" }, { "paragraph_id": 29, "text": "The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms. 
A general exponential function y = a has a derivative, given by a limit:", "title": "In calculus" }, { "paragraph_id": 30, "text": "The parenthesized limit on the right is independent of the variable x. Its value turns out to be the logarithm of a to base e. Thus, when the value of a is set to e, this limit is equal to 1, and so one arrives at the following simple identity:", "title": "In calculus" }, { "paragraph_id": 31, "text": "Consequently, the exponential function with base e is particularly suited to doing calculus. Choosing e (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler.", "title": "In calculus" }, { "paragraph_id": 32, "text": "Another motivation comes from considering the derivative of the base-a logarithm (i.e., loga x), for x > 0:", "title": "In calculus" }, { "paragraph_id": 33, "text": "where the substitution u = h/x was made. The base-a logarithm of e is 1, if a equals e. So symbolically,", "title": "In calculus" }, { "paragraph_id": 34, "text": "The logarithm with this special base is called the natural logarithm, and is denoted as ln; it behaves well under differentiation since there is no undetermined limit to carry through the calculations.", "title": "In calculus" }, { "paragraph_id": 35, "text": "Thus, there are two ways of selecting such special numbers a. One way is to set the derivative of the exponential function a equal to a, and solve for a. The other way is to set the derivative of the base a logarithm to 1/x and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually the same: the number e.", "title": "In calculus" }, { "paragraph_id": 36, "text": "Other characterizations of e are also possible: one is as the limit of a sequence, another is as the sum of an infinite series, and still others rely on integral calculus. So far, the following two (equivalent) properties have been introduced:", "title": "In calculus" }, { "paragraph_id": 37, "text": "The following four characterizations can be proved to be equivalent:", "title": "In calculus" }, { "paragraph_id": 38, "text": "Similarly:", "title": "In calculus" }, { "paragraph_id": 39, "text": "As in the motivation, the exponential function e is important in part because it is the unique function (up to multiplication by a constant K) that is equal to its own derivative:", "title": "Properties" }, { "paragraph_id": 40, "text": "and therefore its own antiderivative as well:", "title": "Properties" }, { "paragraph_id": 41, "text": "Equivalently, the family of functions", "title": "Properties" }, { "paragraph_id": 42, "text": "where K is any real or complex number, is the full solution to the differential equation", "title": "Properties" }, { "paragraph_id": 43, "text": "The number e is the unique real number such that", "title": "Properties" }, { "paragraph_id": 44, "text": "for all positive x.", "title": "Properties" }, { "paragraph_id": 45, "text": "Also, we have the inequality", "title": "Properties" }, { "paragraph_id": 46, "text": "for all real x, with equality if and only if x = 0. Furthermore, e is the unique base of the exponential for which the inequality a ≥ x + 1 holds for all x. 
This is a limiting case of Bernoulli's inequality.", "title": "Properties" }, { "paragraph_id": 47, "text": "Steiner's problem asks to find the global maximum for the function", "title": "Properties" }, { "paragraph_id": 48, "text": "This maximum occurs precisely at x = e. (One can check that the derivative of ln f(x) is zero only for this value of x.)", "title": "Properties" }, { "paragraph_id": 49, "text": "Similarly, x = 1/e is where the global minimum occurs for the function", "title": "Properties" }, { "paragraph_id": 50, "text": "The infinite tetration", "title": "Properties" }, { "paragraph_id": 51, "text": "converges if and only if x ∈ [(1/e), e] ≈ [0.06599, 1.4447] , shown by a theorem of Leonhard Euler.", "title": "Properties" }, { "paragraph_id": 52, "text": "The real number e is irrational. Euler proved this by showing that its simple continued fraction expansion does not terminate. (See also Fourier's proof that e is irrational.)", "title": "Properties" }, { "paragraph_id": 53, "text": "Furthermore, by the Lindemann–Weierstrass theorem, e is transcendental, meaning that it is not a solution of any non-zero polynomial equation with rational coefficients. It was the first number to be proved transcendental without having been specifically constructed for this purpose (compare with Liouville number); the proof was given by Charles Hermite in 1873.", "title": "Properties" }, { "paragraph_id": 54, "text": "It is conjectured that e is normal, meaning that when e is expressed in any base the possible digits in that base are uniformly distributed (occur with equal probability in any sequence of given length).", "title": "Properties" }, { "paragraph_id": 55, "text": "It is conjectured that e is not a Kontsevich-Zagier period.", "title": "Properties" }, { "paragraph_id": 56, "text": "The exponential function e may be written as a Taylor series", "title": "Properties" }, { "paragraph_id": 57, "text": "Because this series is convergent for every complex value of x, it is commonly used to extend the definition of e to the complex numbers. This, with the Taylor series for sin and cos x, allows one to derive Euler's formula:", "title": "Properties" }, { "paragraph_id": 58, "text": "which holds for every complex x. The special case with x = π is Euler's identity:", "title": "Properties" }, { "paragraph_id": 59, "text": "from which it follows that, in the principal branch of the logarithm,", "title": "Properties" }, { "paragraph_id": 60, "text": "Furthermore, using the laws for exponentiation,", "title": "Properties" }, { "paragraph_id": 61, "text": "which is de Moivre's formula.", "title": "Properties" }, { "paragraph_id": 62, "text": "The expressions of cos x and sin x in terms of the exponential function can be deduced from the Taylor series:", "title": "Properties" }, { "paragraph_id": 63, "text": "The expression cos x + i sin x {\\textstyle \\cos x+i\\sin x} is sometimes abbreviated as cis(x).", "title": "Properties" }, { "paragraph_id": 64, "text": "The number e can be represented in a variety of ways: as an infinite series, an infinite product, a continued fraction, or a limit of a sequence. 
Two of these representations, often used in introductory calculus courses, are the limit", "title": "Representations" }, { "paragraph_id": 65, "text": "given above, and the series", "title": "Representations" }, { "paragraph_id": 66, "text": "obtained by evaluating at x = 1 the above power series representation of e.", "title": "Representations" }, { "paragraph_id": 67, "text": "Less common is the continued fraction", "title": "Representations" }, { "paragraph_id": 68, "text": "which written out looks like", "title": "Representations" }, { "paragraph_id": 69, "text": "This continued fraction for e converges three times as quickly:", "title": "Representations" }, { "paragraph_id": 70, "text": "Many other series, sequence, continued fraction, and infinite product representations of e have been proved.", "title": "Representations" }, { "paragraph_id": 71, "text": "In addition to exact analytical expressions for representation of e, there are stochastic techniques for estimating e. One such approach begins with an infinite sequence of independent random variables X1, X2..., drawn from the uniform distribution on [0, 1]. Let V be the least number n such that the sum of the first n observations exceeds 1:", "title": "Representations" }, { "paragraph_id": 72, "text": "Then the expected value of V is e: E(V) = e.", "title": "Representations" }, { "paragraph_id": 73, "text": "The number of known digits of e has increased substantially during the last decades. This is due both to the increased performance of computers and to algorithmic improvements.", "title": "Representations" }, { "paragraph_id": 74, "text": "Since around 2010, the proliferation of modern high-speed desktop computers has made it feasible for amateurs to compute trillions of digits of e within acceptable amounts of time. On Dec 5, 2020, a record-setting calculation was made, giving e to 31,415,926,535,897 (approximately π×10) digits.", "title": "Representations" }, { "paragraph_id": 75, "text": "One way to compute the digits of e is with the series", "title": "Computing the digits" }, { "paragraph_id": 76, "text": "A faster method involves two recursive functions p ( a , b ) {\\displaystyle p(a,b)} and q ( a , b ) {\\displaystyle q(a,b)} . The functions are defined as", "title": "Computing the digits" }, { "paragraph_id": 77, "text": ". The expression", "title": "Computing the digits" }, { "paragraph_id": 78, "text": "produces the digits of e. This method uses binary splitting to compute e with fewer single-digit arithmetic operations and reduced bit complexity. Combining this with Fast Fourier Transform-based methods of multiplying integers makes computing the digits very fast.", "title": "Computing the digits" }, { "paragraph_id": 79, "text": "During the emergence of internet culture, individuals and organizations sometimes paid homage to the number e.", "title": "In computer culture" }, { "paragraph_id": 80, "text": "In an early example, the computer scientist Donald Knuth let the version numbers of his program Metafont approach e. 
The versions are 2, 2.7, 2.71, 2.718, and so forth.", "title": "In computer culture" }, { "paragraph_id": 81, "text": "In another instance, the IPO filing for Google in 2004, rather than a typical round-number amount of money, the company announced its intention to raise 2,718,281,828 USD, which is e billion dollars rounded to the nearest dollar.", "title": "In computer culture" }, { "paragraph_id": 82, "text": "Google was also responsible for a billboard that appeared in the heart of Silicon Valley, and later in Cambridge, Massachusetts; Seattle, Washington; and Austin, Texas. It read \"{first 10-digit prime found in consecutive digits of e}.com\". The first 10-digit prime in e is 7427466391, which starts at the 99th digit. Solving this problem and visiting the advertised (now defunct) website led to an even more difficult problem to solve, which consisted in finding the fifth term in the sequence 7182818284, 8182845904, 8747135266, 7427466391. It turned out that the sequence consisted of 10-digit numbers found in consecutive digits of e whose digits summed to 49. The fifth term in the sequence is 5966290435, which starts at the 127th digit. Solving this second problem finally led to a Google Labs webpage where the visitor was invited to submit a résumé.", "title": "In computer culture" }, { "paragraph_id": 83, "text": "", "title": "External links" } ]
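The derangement (hat-check) facts stated in the Applications paragraphs can be verified directly: the number of derangements equals n!/e rounded to the nearest integer, and the probability of no matches tends to 1/e. A small sketch using only the standard library:

from math import factorial, e

def derangements(n):
    # inclusion-exclusion: !n = n! * sum_{k=0..n} (-1)^k / k!
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

for n in range(1, 11):
    exact = derangements(n)
    print(n, exact, round(factorial(n) / e), exact / factorial(n))
# the second and third columns agree, and the last column approaches 1/e ≈ 0.367879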
The number e, also known as Euler's number, is a mathematical constant approximately equal to 2.71828 that can be characterized in many ways. It is the base of natural logarithms. It is the limit of (1 + 1/n)^n as n approaches infinity, an expression that arises in the computation of compound interest. It can also be calculated as the sum of the infinite series. It is also the unique positive number a such that the graph of the function y = a^x has a slope of 1 at x = 0. The (natural) exponential function f(x) = e^x is the unique function f that equals its own derivative and satisfies the equation f(0) = 1; hence one can also define e as f(1). The natural logarithm, or logarithm to base e, is the inverse function to the natural exponential function. The natural logarithm of a number k > 1 can be defined directly as the area under the curve y = 1/x between x = 1 and x = k, in which case e is the value of k for which this area equals 1. There are various other characterizations. The number e is sometimes called Euler's number—after the Swiss mathematician Leonhard Euler—or Napier's constant—after John Napier. The constant was discovered by the Swiss mathematician Jacob Bernoulli while studying compound interest. The number e is of great importance in mathematics, alongside 0, 1, π, and i. All five appear in one formulation of Euler's identity e^(iπ) + 1 = 0 and play important and recurring roles across mathematics. Like the constant π, e is irrational and transcendental. To 40 decimal places, the value of e is:
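The displayed formulas the abstract refers to (the limit and series definitions, Euler's identity, and the 40-decimal value) appear to have been stripped during extraction. A reconstruction of the standard statements, in LaTeX:

\[ e \;=\; \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n} \;=\; \sum_{n=0}^{\infty}\frac{1}{n!}, \qquad e^{i\pi} + 1 = 0, \]
\[ e \approx 2.71828\,18284\,59045\,23536\,02874\,71352\,66249\,77572 \quad\text{(40 decimal places).} \]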
2001-11-08T20:21:59Z
2024-01-01T01:15:27Z
[ "Template:Good article", "Template:Short description", "Template:Nowrap", "Template:Blockquote", "Template:Secondary source needed", "Template:Cite news", "Template:For", "Template:Infobox mathematical constant", "Template:Block indent", "Template:Cite journal", "Template:Wikiquote", "Template:Math", "Template:Main", "Template:Citation needed", "Template:Irrational number", "Template:Redirect", "Template:E (mathematical constant)", "Template:See also", "Template:Cite OEIS", "Template:Isbn", "Template:Frac2", "Template:Clarify", "Template:Cite book", "Template:Commons category", "Template:Radic", "Template:Pp-move-indef", "Template:Avoid wrap", "Template:Pi", "Template:X10^", "Template:Lang", "Template:Mdash", "Template:Em", "Template:Ordered list", "Template:Reflist", "Template:Webarchive", "Template:Mvar", "Template:Cite web", "Template:Cite magazine", "Template:Authority control" ]
https://en.wikipedia.org/wiki/E_(mathematical_constant)
9,637
Euler–Maclaurin formula
In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence. The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula. If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m,n], then the integral can be approximated by the sum (or vice versa) (see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f(x) evaluated at the endpoints of the interval, that is to say x = m and x = n. Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m,n], we have where Bk is the kth Bernoulli number (with B1 = 1/2) and Rp is an error term which depends on n, m, p, and f and is usually small for suitable values of p. The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B1. In this case we have or alternatively The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, …, n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term. The remainder term has an exact expression in terms of the periodized Bernoulli functions Pk(x). The Bernoulli polynomials may be defined recursively by B0(x) = 1 and, for k ≥ 1, The periodized Bernoulli functions are defined as where ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0,1). With this notation, the remainder term Rp equals When k > 0, it can be shown that where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials Bk(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as The Bernoulli numbers from B1 to B7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore the low-order cases of the Euler–Maclaurin formula are: The Basel problem is to determine the sum Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π/6, which he proved in the same year. If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x, we can choose p = 2 to obtain, after simplification, The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = b − a/N − 1. Set xi = a + (i − 1)h, so that x1 = a and xN = b. 
Then: This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention. The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation. In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is where a and b are integers. Often the expansion remains valid even after taking the limits a → −∞ or b → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example, Here the left-hand side is equal to ψ(z), namely the first-order polygamma function defined by the gamma function Γ(z) is equal to (z − 1)! when z is a positive integer. This results in an asymptotic expansion for ψ(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function. If s is an integer greater than 1 we have: Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion: For s equal to 2 this simplifies to or When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers: where γ ≈ 0.5772... is the Euler–Mascheroni constant. We outline the argument given in Apostol. The Bernoulli polynomials Bn(x) and the periodic Bernoulli functions Pn(x) for n = 0, 1, 2, ... were introduced above. The first several Bernoulli polynomials are The values Bn(1) are the Bernoulli numbers Bn. Notice that for n ≠ 1 we have and for n = 1, The functions Pn agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus, Let k be an integer, and consider the integral where Integrating by parts, we get Using B1(0) = −1/2, B1(1) = 1/2, and summing the above from k = 0 to k = n − 1, we get Adding f(n) − f(0)/2 to both sides and rearranging, we have This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term: where The result of integrating by parts is Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula, This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.
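As a numeric illustration of the summation formula above, the following Python sketch compares the finite sum of f(k) = 1/k^2 over [m, n] with the integral plus endpoint and Bernoulli-number corrections. The derivative formula is hard-coded for this particular f, and the choice m = 10, n = 100 keeps the asymptotic terms small.

from math import factorial

def f(x):
    return x ** -2

def df(x, k):
    # k-th derivative of x^-2 is (-1)^k * (k + 1)! * x^-(k + 2)
    return (-1) ** k * factorial(k + 1) * x ** -(k + 2)

B = {2: 1/6, 4: -1/30, 6: 1/42}            # even Bernoulli numbers used below

def euler_maclaurin(m, n, p):
    total = 1 / m - 1 / n                  # exact integral of x^-2 over [m, n]
    total += (f(m) + f(n)) / 2             # endpoint (B_1) term
    for k in range(2, p + 1, 2):           # odd Bernoulli numbers (k > 1) vanish
        total += B[k] / factorial(k) * (df(n, k - 1) - df(m, k - 1))
    return total

m, n = 10, 100
exact = sum(f(k) for k in range(m, n + 1))
for p in (2, 4, 6):
    approx = euler_maclaurin(m, n, p)
    print(f"p = {p}: approx = {approx:.12f}, error = {abs(approx - exact):.1e}")
# the error shrinks rapidly as more correction terms are included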
[ { "paragraph_id": 0, "text": "In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.", "title": "" }, { "paragraph_id": 1, "text": "The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.", "title": "" }, { "paragraph_id": 2, "text": "If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m,n], then the integral", "title": "The formula" }, { "paragraph_id": 3, "text": "can be approximated by the sum (or vice versa)", "title": "The formula" }, { "paragraph_id": 4, "text": "(see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f(x) evaluated at the endpoints of the interval, that is to say x = m and x = n.", "title": "The formula" }, { "paragraph_id": 5, "text": "Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m,n], we have", "title": "The formula" }, { "paragraph_id": 6, "text": "where Bk is the kth Bernoulli number (with B1 = 1/2) and Rp is an error term which depends on n, m, p, and f and is usually small for suitable values of p.", "title": "The formula" }, { "paragraph_id": 7, "text": "The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B1. In this case we have", "title": "The formula" }, { "paragraph_id": 8, "text": "or alternatively", "title": "The formula" }, { "paragraph_id": 9, "text": "The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, …, n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term.", "title": "The formula" }, { "paragraph_id": 10, "text": "The remainder term has an exact expression in terms of the periodized Bernoulli functions Pk(x). The Bernoulli polynomials may be defined recursively by B0(x) = 1 and, for k ≥ 1,", "title": "The formula" }, { "paragraph_id": 11, "text": "The periodized Bernoulli functions are defined as", "title": "The formula" }, { "paragraph_id": 12, "text": "where ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0,1).", "title": "The formula" }, { "paragraph_id": 13, "text": "With this notation, the remainder term Rp equals", "title": "The formula" }, { "paragraph_id": 14, "text": "When k > 0, it can be shown that", "title": "The formula" }, { "paragraph_id": 15, "text": "where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials Bk(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k but the proof in this case is more complex (see Lehmer). 
Using this inequality, the size of the remainder term can be estimated as", "title": "The formula" }, { "paragraph_id": 16, "text": "The Bernoulli numbers from B1 to B7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore the low-order cases of the Euler–Maclaurin formula are:", "title": "The formula" }, { "paragraph_id": 17, "text": "The Basel problem is to determine the sum", "title": "Applications" }, { "paragraph_id": 18, "text": "Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π/6, which he proved in the same year.", "title": "Applications" }, { "paragraph_id": 19, "text": "If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x, we can choose p = 2 to obtain, after simplification,", "title": "Applications" }, { "paragraph_id": 20, "text": "The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = b − a/N − 1. Set xi = a + (i − 1)h, so that x1 = a and xN = b. Then:", "title": "Applications" }, { "paragraph_id": 21, "text": "This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention.", "title": "Applications" }, { "paragraph_id": 22, "text": "The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.", "title": "Applications" }, { "paragraph_id": 23, "text": "In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is", "title": "Applications" }, { "paragraph_id": 24, "text": "where a and b are integers. Often the expansion remains valid even after taking the limits a → −∞ or b → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,", "title": "Applications" }, { "paragraph_id": 25, "text": "Here the left-hand side is equal to ψ(z), namely the first-order polygamma function defined by", "title": "Applications" }, { "paragraph_id": 26, "text": "the gamma function Γ(z) is equal to (z − 1)! when z is a positive integer. This results in an asymptotic expansion for ψ(z). 
That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.", "title": "Applications" }, { "paragraph_id": 27, "text": "If s is an integer greater than 1 we have:", "title": "Applications" }, { "paragraph_id": 28, "text": "Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion:", "title": "Applications" }, { "paragraph_id": 29, "text": "For s equal to 2 this simplifies to", "title": "Applications" }, { "paragraph_id": 30, "text": "or", "title": "Applications" }, { "paragraph_id": 31, "text": "When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers:", "title": "Applications" }, { "paragraph_id": 32, "text": "where γ ≈ 0.5772... is the Euler–Mascheroni constant.", "title": "Applications" }, { "paragraph_id": 33, "text": "We outline the argument given in Apostol.", "title": "Proofs" }, { "paragraph_id": 34, "text": "The Bernoulli polynomials Bn(x) and the periodic Bernoulli functions Pn(x) for n = 0, 1, 2, ... were introduced above.", "title": "Proofs" }, { "paragraph_id": 35, "text": "The first several Bernoulli polynomials are", "title": "Proofs" }, { "paragraph_id": 36, "text": "The values Bn(1) are the Bernoulli numbers Bn. Notice that for n ≠ 1 we have", "title": "Proofs" }, { "paragraph_id": 37, "text": "and for n = 1,", "title": "Proofs" }, { "paragraph_id": 38, "text": "The functions Pn agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus,", "title": "Proofs" }, { "paragraph_id": 39, "text": "Let k be an integer, and consider the integral", "title": "Proofs" }, { "paragraph_id": 40, "text": "where", "title": "Proofs" }, { "paragraph_id": 41, "text": "Integrating by parts, we get", "title": "Proofs" }, { "paragraph_id": 42, "text": "Using B1(0) = −1/2, B1(1) = 1/2, and summing the above from k = 0 to k = n − 1, we get", "title": "Proofs" }, { "paragraph_id": 43, "text": "Adding f(n) − f(0)/2 to both sides and rearranging, we have", "title": "Proofs" }, { "paragraph_id": 44, "text": "This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term:", "title": "Proofs" }, { "paragraph_id": 45, "text": "where", "title": "Proofs" }, { "paragraph_id": 46, "text": "The result of integrating by parts is", "title": "Proofs" }, { "paragraph_id": 47, "text": "Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula,", "title": "Proofs" }, { "paragraph_id": 48, "text": "This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.", "title": "Proofs" } ]
In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence. The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.
2023-05-26T14:23:41Z
[ "Template:Cite book", "Template:MathWorld", "Template:Calculus topics", "Template:Short description", "Template:Reflist", "Template:Cite journal", "Template:Cite web", "Template:Use American English", "Template:Math", "Template:Seealso", "Template:Leonhard Euler", "Template:Mvar", "Template:Refbegin", "Template:Refend" ]
https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula
9,638
Epimenides paradox
The Epimenides paradox reveals a problem with self-reference in logic. It is named after the Cretan philosopher Epimenides of Knossos (alive circa 600 BC), who is credited with the original statement. A typical description of the problem is given in the book Gödel, Escher, Bach, by Douglas Hofstadter: Epimenides was a Cretan who made the immortal statement: "All Cretans are liars." A paradox of self-reference arises when one considers whether it is possible for Epimenides to have spoken the truth. According to Ptolemaeus Chennus, Thetis and Medea had once argued in Thessaly over which was the most beautiful; they appointed the Cretan Idomeneus as the judge, who gave the victory to Thetis. In her anger, Medea called all Cretans liars, and cursed them to never say the truth. Thomas Fowler (1869) states the paradox as follows: "Epimenides the Cretan says, 'that all the Cretans are liars,' but Epimenides is himself a Cretan; therefore he is himself a liar. But if he is a liar, what he says is untrue, and consequently, the Cretans are veracious; but Epimenides is a Cretan, and therefore what he says is true; saying the Cretans are liars, Epimenides is himself a liar, and what he says is untrue. Thus we may go on alternately proving that Epimenides and the Cretans are truthful and untruthful." If we assume the statement is false and that Epimenides is lying about all Cretans being liars, then there must exist at least one Cretan who is honest. This does not lead to a contradiction, since it is not required that this Cretan be Epimenides. This means that Epimenides can say the false statement that all Cretans are liars while knowing at least one honest Cretan and lying about this particular Cretan. Hence, from the assumption that the statement is false, it does not follow that the statement is true. So we can avoid the paradox by seeing the statement "all Cretans are liars" as a false statement made by a lying Cretan, Epimenides. The mistake made by Thomas Fowler (and many other people) above is to think that the negation of "all Cretans are liars" is "all Cretans are honest" (a paradox) when in fact the negation is "there exists a Cretan who is honest", or "not all Cretans are liars". The Epimenides paradox can be slightly modified so as not to allow the kind of solution described above, as it was in the first paradox of Eubulides, instead leading to an unavoidable self-contradiction. Paradoxical versions of the Epimenides problem are closely related to a class of more difficult logical problems, including the liar paradox, the Socratic paradox and the Burali-Forti paradox, all of which have self-reference in common with Epimenides. The Epimenides paradox is usually classified as a variation on the liar paradox, and sometimes the two are not distinguished. The study of self-reference led to important developments in logic and mathematics in the twentieth century. In other words, it is not a paradox once one realizes that "All Cretans are liars" being untrue only means "Not all Cretans are liars" rather than "All Cretans are honest". Perhaps better put, for "All Cretans are liars" to be a true statement, it does not mean that all Cretans must lie all the time. In fact, Cretans could tell the truth quite often, but still all be liars in the sense that liars are people prone to deception for dishonest gain. Considering that "All Cretans are liars" has been seen as a paradox only since the 19th century, this seems to resolve the alleged paradox. 
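As a formal gloss on the point about negation (this notation is an addition, not part of the original article), the resolution rests on the usual rule for negating a universal statement:

```latex
\neg\,\forall x\,\bigl(\mathrm{Cretan}(x) \rightarrow \mathrm{Liar}(x)\bigr)
\;\Longleftrightarrow\;
\exists x\,\bigl(\mathrm{Cretan}(x) \wedge \neg\,\mathrm{Liar}(x)\bigr)
```

So if Epimenides' statement is false, all that follows is that some Cretan is truthful, not that every Cretan is, and in particular not that Epimenides is. A genuine self-contradiction only appears in strengthened, directly self-referential forms such as the liar paradox attributed to Eubulides that is mentioned above.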
If 'all Cretans are continuous liars' is actually true, then asking a Cretan if they are honest would always elicit the dishonest answer 'yes'. So arguably the original proposition is not so much paradoxical as invalid. A contextual reading of the contradiction may also provide an answer to the paradox. The original phrase, "The Cretans, always liars, evil beasts, idle bellies!", asserts not an intrinsic paradox but rather Epimenides' opinion of the Cretans: a stereotyping of his people, not intended as an absolute statement about the people as a whole. Rather, it is a claim made about their position regarding their religious beliefs and socio-cultural attitudes. Within the context of his poem the phrase is specific to a certain belief, a context that Callimachus repeats in his poem regarding Zeus. Further, a more pointed answer to the paradox is simply that to be a liar is to state falsehoods; nothing in the statement asserts that everything said is false, but rather that the Cretans are "always" lying. This is not an absolute statement of fact, and thus we cannot conclude that there is a true contradiction made by Epimenides with this statement. Epimenides was a 6th-century BC philosopher and religious prophet who, against the general sentiment of Crete, proposed that Zeus was immortal, as in the following poem: They fashioned a tomb for thee, O holy and high one / The Cretans, always liars, evil beasts, idle bellies! / But thou art not dead: thou livest and abidest forever, / For in thee we live and move and have our being. Denying the immortality of Zeus, then, was the lie of the Cretans. The phrase "Cretans, always liars" was quoted by the poet Callimachus in his Hymn to Zeus, with the same theological intent as Epimenides: O Zeus, some say that thou wert born on the hills of Ida; Others, O Zeus, say in Arcadia; Did these or those, O Father lie? -- "Cretans are ever liars". Yea, a tomb, O Lord, for thee the Cretans builded; But thou didst not die, for thou art for ever. The logical inconsistency of a Cretan asserting all Cretans are always liars may not have occurred to Epimenides, nor to Callimachus, who both used the phrase to emphasize their point, without irony, perhaps meaning that all Cretans lie routinely, but not exclusively. In the 1st century AD, the quote is mentioned by the author of the Epistle to Titus as having been spoken truly by "one of their own prophets." "One of Crete's own prophets has said it: 'Cretans are always liars, evil brutes, idle bellies'. He has surely told the truth. For this reason correct them sternly, that they may be sound in faith instead of paying attention to Jewish fables and to commandments of people who turn their backs on the truth." Clement of Alexandria, in the late 2nd century AD, fails to indicate that the concept of logical paradox is an issue: In his epistle to Titus, the Apostle Paul wants to warn Titus that Cretans don't believe in the one truth of Christianity, because "Cretans are always liars". To justify his claim, the Apostle Paul cites Epimenides. During the early 4th century, Saint Augustine restates the closely related liar paradox in Against the Academicians (III.13.29), but without mentioning Epimenides. In the Middle Ages, many forms of the liar paradox were studied under the heading of insolubilia, but these were not explicitly associated with Epimenides. Finally, in 1740, the second volume of Pierre Bayle's Dictionnaire Historique et Critique explicitly connects Epimenides with the paradox, though Bayle labels the paradox a "sophisme". 
All of the works of Epimenides are now lost, and known only through quotations by other authors. The quotation from the Cretica of Epimenides is given by R.N. Longenecker, "Acts of the Apostles", in volume 9 of The Expositor's Bible Commentary, Frank E. Gaebelein, editor (Grand Rapids, Michigan: Zondervan Corporation, 1976–1984), page 476. Longenecker in turn cites M.D. Gibson, Horae Semiticae X (Cambridge: Cambridge University Press, 1913), page 40, "in Syriac". Longenecker states the following in a footnote: The Syr. version of the quatrain comes to us from the Syr. church father Isho'dad of Merv (probably based on the work of Theodore of Mopsuestia), which J.R. Harris translated back into Gr. in Exp ["The Expositor"] 7 (1907), p 336. An oblique reference to Epimenides in the context of logic appears in "The Logical Calculus" by W. E. Johnson, Mind (New Series), volume 1, number 2 (April, 1892), pages 235–250. Johnson writes in a footnote, Compare, for example, such occasions for fallacy as are supplied by "Epimenides is a liar" or "That surface is red," which may be resolved into "All or some statements of Epimenides are false," "All or some of the surface is red." The Epimenides paradox appears explicitly in "Mathematical Logic as Based on the Theory of Types", by Bertrand Russell, in the American Journal of Mathematics, volume 30, number 3 (July, 1908), pages 222–262, which opens with the following: The oldest contradiction of the kind in question is the Epimenides. Epimenides the Cretan said that all Cretans were liars, and all other statements made by Cretans were certainly lies. Was this a lie? In that article, Russell uses the Epimenides paradox as the point of departure for discussions of other problems, including the Burali-Forti paradox and the paradox now called Russell's paradox. Since Russell, the Epimenides paradox has been referenced repeatedly in logic. Typical of these references is Gödel, Escher, Bach by Douglas Hofstadter, which accords the paradox a prominent place in a discussion of self-reference. It is also believed that the "Cretan tales" told by Odysseus in The Odyssey by Homer are a reference to this paradox.
[ { "paragraph_id": 0, "text": "The Epimenides paradox reveals a problem with self-reference in logic. It is named after the Cretan philosopher Epimenides of Knossos (alive circa 600 BC) who is credited with the original statement. A typical description of the problem is given in the book Gödel, Escher, Bach, by Douglas Hofstadter:", "title": "" }, { "paragraph_id": 1, "text": "Epimenides was a Cretan who made the immortal statement: \"All Cretans are liars.\"", "title": "" }, { "paragraph_id": 2, "text": "A paradox of self-reference arises when one considers whether it is possible for Epimenides to have spoken the truth.", "title": "" }, { "paragraph_id": 3, "text": "According to Ptolemaeus Chennus, Thetis and Medea had once argued in Thessaly over which was the most beautiful; they appointed the Cretan Idomeneus as the judge, who gave the victory to Thetis. In her anger, Medea called all Cretans liars, and cursed them to never say the truth.", "title": "Mythology of lying Cretans" }, { "paragraph_id": 4, "text": "Thomas Fowler (1869) states the paradox as follows: \"Epimenides the Cretan says, 'that all the Cretans are liars,' but Epimenides is himself a Cretan; therefore he is himself a liar. But if he is a liar, what he says is untrue, and consequently, the Cretans are veracious; but Epimenides is a Cretan, and therefore what he says is true; saying the Cretans are liars, Epimenides is himself a liar, and what he says is untrue. Thus we may go on alternately proving that Epimenides and the Cretans are truthful and untruthful.\"", "title": "Logical paradox" }, { "paragraph_id": 5, "text": "If we assume the statement is false and that Epimenides is lying about all Cretans being liars, then there must exist at least one Cretan who is honest. This does not lead to a contradiction since it is not required that this Cretan be Epimenides. This means that Epimenides can say the false statement that all Cretans are liars while knowing at least one honest Cretan and lying about this particular Cretan. Hence, from the assumption that the statement is false, it does not follow that the statement is true. So we can avoid a paradox as seeing the statement \"all Cretans are liars\" as a false statement, which is made by a lying Cretan, Epimenides. The mistake made by Thomas Fowler (and many other people) above is to think that the negation of \"all Cretans are liars\" is \"all Cretans are honest\" (a paradox) when in fact the negation is \"there exists a Cretan who is honest\", or \"not all Cretans are liars\". The Epimenides paradox can be slightly modified as to not allow the kind of solution described above, as it was in the first paradox of Eubulides but instead leading to a non-avoidable self-contradiction. Paradoxical versions of the Epimenides problem are closely related to a class of more difficult logical problems, including the liar paradox, Socratic paradox and the Burali-Forti paradox, all of which have self-reference in common with Epimenides. The Epimenides paradox is usually classified as a variation on the liar paradox, and sometimes the two are not distinguished. 
The study of self-reference led to important developments in logic and mathematics in the twentieth century.", "title": "Logical paradox" }, { "paragraph_id": 6, "text": "In other words, it is not a paradox once one realizes \"All Cretans are liars\" being untrue only means \"Not all Cretans are liars\" instead of the assumption that \"All Cretans are honest\".", "title": "Logical paradox" }, { "paragraph_id": 7, "text": "Perhaps better put, for \"All Cretans are liars\" to be a true statement, it does not mean that all Cretans must lie all the time. In fact, Cretans could tell the truth quite often, but still all be liars in the sense that liars are people prone to deception for dishonest gain. Considering that \"All Cretans are liars\" has been seen as a paradox only since the 19th century, this seems to resolve the alleged paradox. If 'all Cretans are continuous liars' is actually true, then asking a Cretan if they are honest would always elicit the dishonest answer 'yes'. So arguably the original proposition is not so much paradoxical as invalid.", "title": "Logical paradox" }, { "paragraph_id": 8, "text": "A contextual reading of the contradiction may also provide an answer to the paradox. The original phrase, \"The Cretans, always liars, evil beasts, idle bellies!\" asserts not an intrinsic paradox, but rather an opinion of the Cretans from Epimenides. A stereotyping of his people not intended to be an absolute statement about the people as a whole. Rather it is a claim made about their position regarding their religious beliefs and socio-cultural attitudes. Within the context of his poem the phrase is specific to a certain belief, a context that Callimachus repeats in his poem regarding Zeus. Further, a more poignant answer to the paradox is simply that to be a liar is to state falsehoods, nothing in the statement asserts everything said is false, but rather they're \"always\" lying. This is not an absolute statement of fact and thus we cannot conclude there's a true contradiction made by Epimenides with this statement.", "title": "Logical paradox" }, { "paragraph_id": 9, "text": "Epimenides was a 6th-century BC philosopher and religious prophet who, against the general sentiment of Crete, proposed that Zeus was immortal, as in the following poem:", "title": "Origin of the phrase" }, { "paragraph_id": 10, "text": "They fashioned a tomb for thee, O holy and high oneThe Cretans, always liars, evil beasts, idle bellies!But thou art not dead: thou livest and abidest forever,For in thee we live and move and have our being.", "title": "Origin of the phrase" }, { "paragraph_id": 11, "text": "Denying the immortality of Zeus, then, was the lie of the Cretans.", "title": "Origin of the phrase" }, { "paragraph_id": 12, "text": "The phrase \"Cretans, always liars\" was quoted by the poet Callimachus in his Hymn to Zeus, with the same theological intent as Epimenides:", "title": "Origin of the phrase" }, { "paragraph_id": 13, "text": "O Zeus, some say that thou wert born on the hills of Ida; Others, O Zeus, say in Arcadia; Did these or those, O Father lie? -- \"Cretans are ever liars\". 
Yea, a tomb, O Lord, for thee the Cretans builded; But thou didst not die, for thou art for ever.", "title": "Origin of the phrase" }, { "paragraph_id": 14, "text": "The logical inconsistency of a Cretan asserting all Cretans are always liars may not have occurred to Epimenides, nor to Callimachus, who both used the phrase to emphasize their point, without irony, perhaps meaning that all Cretans lie routinely, but not exclusively.", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 15, "text": "In the 1st century AD, the quote is mentioned by the author of the Epistle to Titus as having been spoken truly by \"one of their own prophets.\"", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 16, "text": "\"One of Crete's own prophets has said it: 'Cretans are always liars, evil brutes, idle bellies'.He has surely told the truth. For this reason correct them sternly, that they may be sound in faith instead of paying attention to Jewish fables and to commandments of people who turn their backs on the truth.\"", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 17, "text": "Clement of Alexandria, in the late 2nd century AD, fails to indicate that the concept of logical paradox is an issue:", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 18, "text": "In his epistle to Titus, Apostle Paul wants to warn Titus that Cretans don't believe in the one truth of Christianity, because \"Cretans are always liars\". To justify his claim, Apostle Paul cites Epimenides.", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 19, "text": "During the early 4th century, Saint Augustine restates the closely related liar paradox in Against the Academicians (III.13.29), but without mentioning Epimenides.", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 20, "text": "In the Middle Ages, many forms of the liar paradox were studied under the heading of insolubilia, but these were not explicitly associated with Epimenides.", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 21, "text": "Finally, in 1740, the second volume of Pierre Bayle's Dictionnaire Historique et Critique explicitly connects Epimenides with the paradox, though Bayle labels the paradox a \"sophisme\".", "title": "Emergence as a logical contradiction" }, { "paragraph_id": 22, "text": "All of the works of Epimenides are now lost, and known only through quotations by other authors. The quotation from the Cretica of Epimenides is given by R.N. Longenecker, \"Acts of the Apostles\", in volume 9 of The Expositor's Bible Commentary, Frank E. Gaebelein, editor (Grand Rapids, Michigan: Zondervan Corporation, 1976–1984), page 476. Longenecker in turn cites M.D. Gibson, Horae Semiticae X (Cambridge: Cambridge University Press, 1913), page 40, \"in Syriac\". Longenecker states the following in a footnote:", "title": "References by other authors" }, { "paragraph_id": 23, "text": "The Syr. version of the quatrain comes to us from the Syr. church father Isho'dad of Merv (probably based on the work of Theodore of Mopsuestia), which J.R. Harris translated back into Gr. in Exp [\"The Expositor\"] 7 (1907), p 336.", "title": "References by other authors" }, { "paragraph_id": 24, "text": "An oblique reference to Epimenides in the context of logic appears in \"The Logical Calculus\" by W. E. Johnson, Mind (New Series), volume 1, number 2 (April, 1892), pages 235–250. 
Johnson writes in a footnote,", "title": "References by other authors" }, { "paragraph_id": 25, "text": "Compare, for example, such occasions for fallacy as are supplied by \"Epimenides is a liar\" or \"That surface is red,\" which may be resolved into \"All or some statements of Epimenides are false,\" \"All or some of the surface is red.\"", "title": "References by other authors" }, { "paragraph_id": 26, "text": "The Epimenides paradox appears explicitly in \"Mathematical Logic as Based on the Theory of Types\", by Bertrand Russell, in the American Journal of Mathematics, volume 30, number 3 (July, 1908), pages 222–262, which opens with the following:", "title": "References by other authors" }, { "paragraph_id": 27, "text": "The oldest contradiction of the kind in question is the Epimenides. Epimenides the Cretan said that all Cretans were liars, and all other statements made by Cretans were certainly lies. Was this a lie?", "title": "References by other authors" }, { "paragraph_id": 28, "text": "In that article, Russell uses the Epimenides paradox as the point of departure for discussions of other problems, including the Burali-Forti paradox and the paradox now called Russell's paradox. Since Russell, the Epimenides paradox has been referenced repeatedly in logic. Typical of these references is Gödel, Escher, Bach by Douglas Hofstadter, which accords the paradox a prominent place in a discussion of self-reference.", "title": "References by other authors" }, { "paragraph_id": 29, "text": "It is also believed that the \"Cretan tales\" told by Odysseus in The Odyssey by Homer are a reference to this paradox.", "title": "References by other authors" } ]
The Epimenides paradox reveals a problem with self-reference in logic. It is named after the Cretan philosopher Epimenides of Knossos who is credited with the original statement. A typical description of the problem is given in the book Gödel, Escher, Bach, by Douglas Hofstadter: A paradox of self-reference arises when one considers whether it is possible for Epimenides to have spoken the truth.
2001-09-24T18:17:04Z
2023-10-16T03:29:09Z
[ "Template:Use dmy dates", "Template:Notelist", "Template:Reflist", "Template:Cite book", "Template:Short description", "Template:Blockquote", "Template:Quotation", "Template:Quote", "Template:Cite web", "Template:Cite IEP", "Template:Paradoxes" ]
https://en.wikipedia.org/wiki/Epimenides_paradox
9,640
Engine
An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy. Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form, so heat engines have special importance. Some natural processes, such as atmospheric convection cells convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing. Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine, in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion. Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine). Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions. All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen resulting in small emissions of NOx, which is adverse even in small quantities. If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, large quantities of CO2 are emitted, a potent greenhouse gas. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of NOx, but this is an electrochemical engine not a heat engine. The word engine derives from Old French engin, from the Latin ingenium–the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. 
Most mechanical devices invented during the industrial revolution were described as engines—the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses. In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets. When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb moto which means 'to set in motion', or 'maintain motion'. Thus a motor is a device that imparts motion. Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though they consume fuel. A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam). Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times. According to Strabo, a water-powered mill was built in Kaberia of the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. 
Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed-wheels made of wood and metal to regulate the speed of rotation. More sophisticated small devices, such as the Antikythera Mechanism used complex trains of gears and dials to act as calendars or predict astronomical events. In a poem by Ausonius in the 4th century AD, he mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind and steam powered machines in the 1st century AD, including the Aeolipile and the vending machine, often these machines were associated with worship, such as animated altars and automated temple doors. Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour. In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629. In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. After invention, this innovation spread throughout Europe. The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation. As for internal combustion piston engines, these were tested in France in 1807 by de Rivaz and independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine. The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir. In 1877, the Otto cycle was capable of giving a far higher power-to-weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft. The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the more efficient Diesel engine is used for trucks and buses. However, in recent years, turbo Diesel engines have become increasingly popular, especially outside of the United States, even for quite small cars. 
In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. Engines of this design are often referred to as flat engines because of their shape and lower profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, many BMW and Honda motorcycles, and propeller aircraft engines. Continuance of the use of the internal combustion engine for automobiles is partly due to the improvement of engine control systems (onboard computers providing engine management processes, and electronically controlled fuel injection). Forced air induction by turbocharging and supercharging have increased power outputs and engine efficiencies. Similar changes have been applied to smaller diesel engines giving them almost the same power characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine propelled cars in Europe. Larger diesel engines are still often used in trucks and heavy machinery, although they require special machining not available in most factories. Diesel engines produce lower hydrocarbon and CO2 emissions, but greater particulate and NOx pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines. In the first half of the 20th century, a trend of increasing engine power occurred, particularly in the U.S models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. The higher forces and pressures created by these changes created engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements. Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around 110 °C (230 °F). Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. Four cylinders and power ratings from 19 to 120 hp (14 to 90 kW) were followed in a majority of the models. Several three-cylinder, two-stroke-cycle models were built while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. The Bugatti Veyron 16.4 operates with a W16 engine, meaning that two V8 cylinder layouts are positioned next to each other to create the W shape sharing the same crankshaft. 
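To make the connection drawn above between cylinder pressure, compression ratio, and efficiency more concrete, here is a small Python sketch (an illustration added here, not taken from the article) using the standard ideal air-standard Otto-cycle relation η = 1 − r^(1 − γ); real engines fall well short of these ideal figures, and the sample ratios are purely illustrative.

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal air-standard Otto-cycle thermal efficiency for compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)

for r in (6, 8, 10, 12):
    print(f"compression ratio {r}: ideal efficiency {otto_efficiency(r):.0%}")
# e.g. r = 8 gives roughly 56%, while r = 12 gives roughly 63% (ideal-cycle values)
```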
The largest internal combustion engine ever built is the Wärtsilä-Sulzer RTA96-C, a 14-cylinder, 2-stroke turbocharged diesel engine that was designed to power the Emma Mærsk, the largest container ship in the world when launched in 2006. This engine has a mass of 2,300 tonnes, and when running at 102 rpm (1.7 Hz) produces over 80 MW, and can use up to 250 tonnes of fuel per day. An engine can be put into a category according to two criteria: the form of energy it accepts in order to create motion, and the type of motion it outputs. Combustion engines are heat engines driven by the heat of a combustion process. The internal combustion engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and high pressure gases, which are produced by the combustion, directly applies force to components of the engine, such as the pistons or turbine blades or a nozzle, and by moving it over a distance, generates mechanical work. An external combustion engine (EC engine) is a heat engine where an internal working fluid is heated by combustion of an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine produces motion and usable work. The fluid is then cooled, compressed and reused (closed cycle), or (less commonly) dumped, and cool fluid pulled in (open cycle air engine). "Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; but are not then strictly classed as external combustion engines, but as external thermal engines. The working fluid can be a gas as in a Stirling engine, or steam as in a steam engine or an organic liquid such as n-pentane in an Organic Rankine cycle. The fluid can be of any composition; gas is by far the most common, although even single-phase liquid is sometimes used. In the case of the steam engine, the fluid changes phases between liquid and gas. Air-breathing combustion engines are combustion engines that use the oxygen in atmospheric air to oxidise ('burn') the fuel, rather than carrying an oxidiser, as in a rocket. Theoretically, this should result in a better specific impulse than for rocket engines. A continuous stream of air flows through the air-breathing engine. This air is compressed, mixed with fuel, ignited and expelled as the exhaust gas. In reaction engines, the majority of the combustion energy (heat) exits the engine as exhaust gas, which provides thrust directly. Typical air-breathing engines include: The operation of engines typically has a negative impact upon air quality and ambient sound levels. There has been a growing emphasis on the pollution producing features of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements. Though a few limited-production battery-powered electric vehicles have appeared, they have not proved competitive owing to costs and operating characteristics. In the 21st century the diesel engine has been increasing in popularity with automobile owners. However, the gasoline engine and the Diesel engine, with their new emission-control devices to improve emission performance, have not yet been significantly challenged. 
A number of manufacturers have introduced hybrid engines, mainly involving a small gasoline engine coupled with an electric motor and with a large battery bank, these are starting to become a popular option because of their environment awareness. Exhaust gas from a spark ignition engine consists of the following: nitrogen 70 to 75% (by volume), water vapor 10 to 12%, carbon dioxide 10 to 13.5%, hydrogen 0.5 to 2%, oxygen 0.2 to 2%, carbon monoxide: 0.1 to 6%, unburnt hydrocarbons and partial oxidation products (e.g. aldehydes) 0.5 to 1%, nitrogen monoxide 0.01 to 0.4%, nitrous oxide <100 ppm, sulfur dioxide 15 to 60 ppm, traces of other compounds such as fuel additives and lubricants, also halogen and metallic compounds, and other particles. Carbon monoxide is highly toxic, and can cause carbon monoxide poisoning, so it is important to avoid any build-up of the gas in a confined space. Catalytic converters can reduce toxic emissions, but not eliminate them. Also, resulting greenhouse gas emissions, chiefly carbon dioxide, from the widespread use of engines in the modern industrialized world is contributing to the global greenhouse effect – a primary concern regarding global warming. Some engines convert heat from noncombustive processes into mechanical work, for example a nuclear power plant uses the heat from the nuclear reaction to produce steam and drive a steam engine, or a gas turbine in a rocket engine may be driven by decomposing hydrogen peroxide. Apart from the different energy source, the engine is often engineered much the same as an internal or external combustion engine. Another group of noncombustive engines includes thermoacoustic heat engines (sometimes called "TA engines") which are thermoacoustic devices that use high-amplitude sound waves to pump heat from one place to another, or conversely use a heat difference to induce high-amplitude sound waves. In general, thermoacoustic engines can be divided into standing wave and travelling wave devices. Stirling engines can be another form of non-combustive heat engine. They use the Stirling thermodynamic cycle to convert heat into work. An example is the alpha type Stirling engine, whereby gas flows, via a recuperator, between a hot cylinder and a cold cylinder, which are attached to reciprocating pistons 90° out of phase. The gas receives heat at the hot cylinder and expands, driving the piston that turns the crankshaft. After expanding and flowing through the recuperator, the gas rejects heat at the cold cylinder and the ensuing pressure drop leads to its compression by the other (displacement) piston, which forces it back to the hot cylinder. Non-thermal motors usually are powered by a chemical reaction, but are not heat engines. Examples include: An electric motor uses electrical energy to produce mechanical energy, usually through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Electric motors can be run as generators and vice versa, although this is not always practical. Electric motors are ubiquitous, being found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. 
The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the thousands of kilowatts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application. The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks. To reduce the electric energy consumption from motors and their associated carbon footprints, various regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher efficiency electric motors. A well-designed motor can convert over 90% of its input energy into useful power for decades. When the efficiency of a motor is raised by even a few percentage points, the savings, in kilowatt hours (and therefore in cost), are enormous. The electrical energy efficiency of a typical industrial induction motor can be improved by: 1) reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivities, such as copper), 2) reducing the electrical losses in the rotor coil or casting (e.g., by using materials with higher electrical conductivities, such as copper), 3) reducing magnetic losses by using better quality magnetic steel, 4) improving the aerodynamics of motors to reduce mechanical windage losses, 5) improving bearings to reduce friction losses, and 6) minimizing manufacturing tolerances. For further discussion on this subject, see Premium efficiency). By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor. Some motors are powered by potential or kinetic energy, for example some funiculars, gravity plane and ropeway conveyors have used the energy from moving water or rocks, and some clocks have a weight that falls under gravity. Other forms of potential energy include compressed gases (such as pneumatic motors), springs (clockwork motors) and elastic bands. Historic military siege engines included large catapults, trebuchets, and (to some extent) battering rams were powered by potential energy. A pneumatic motor is a machine that converts potential energy in the form of compressed air into mechanical work. Pneumatic motors generally convert the compressed air to mechanical work through either linear or rotary motion. Linear motion can come from either a diaphragm or piston actuator, while rotary motion is supplied by either a vane type air motor or piston air motor. Pneumatic motors have found widespread success in the hand-held tool industry and continual attempts are being made to expand their use to the transportation industry. However, pneumatic motors must overcome efficiency deficiencies before being seen as a viable option in the transportation industry. A hydraulic motor derives its power from a pressurized liquid. This type of engine is used to move heavy loads and drive machinery. 
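Returning to the earlier point that raising an electric motor's efficiency by even a few percentage points yields large savings in kilowatt-hours, the following rough Python sketch (all figures hypothetical) shows the scale involved for a single mid-sized industrial motor.

```python
def annual_input_kwh(output_kw, efficiency, hours_per_year):
    """Electrical energy drawn per year by a motor delivering output_kw of shaft power."""
    return output_kw / efficiency * hours_per_year

# Hypothetical 10 kW motor running 4,000 hours a year
base   = annual_input_kwh(10, 0.90, 4000)   # about 44,400 kWh drawn at 90% efficiency
better = annual_input_kwh(10, 0.93, 4000)   # about 43,000 kWh drawn at 93% efficiency
print(base - better)   # roughly 1,400 kWh saved per motor per year
```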
Some motor units can have multiple sources of energy. For example, a plug-in hybrid electric vehicle's electric motor could source electricity from either a battery or from fossil fuel inputs via an internal combustion engine and a generator. The following are used in the assessment of the performance of an engine. Speed refers to crankshaft rotation in piston engines and the speed of compressor/turbine rotors and electric motor rotors. It is measured in revolutions per minute (rpm). Thrust is the force exerted on an airplane as a consequence of its propeller or jet engine accelerating the air passing through it. It is also the force exerted on a ship as a consequence of its propeller accelerating the water passing through it. Torque is a turning moment on a shaft and is calculated by multiplying the force causing the moment by its distance from the shaft. Power is the measure of how fast work is done. Efficiency is a measure of how much fuel is wasted in producing power. Vehicle noise is predominantly from the engine at low vehicle speeds and from tires and the air flowing past the vehicle at higher speeds. Electric motors are quieter than internal combustion engines. Thrust-producing engines, such as turbofans, turbojets and rockets, emit the greatest amount of noise due to the way their thrust-producing, high-velocity exhaust streams interact with the surrounding stationary air. Noise reduction technology includes intake and exhaust system mufflers (silencers) on gasoline and diesel engines and noise attenuation liners in turbofan inlets.
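The definitions of torque, speed, and power above are tied together by P = τω, where ω is the angular speed in radians per second. The short Python sketch below (added for illustration) applies that relation to the Wärtsilä-Sulzer RTA96-C figures quoted earlier in the article (roughly 80 MW delivered at 102 rpm); the resulting torque figure is an estimate, not a manufacturer specification.

```python
import math

def shaft_power_watts(torque_nm, rpm):
    """Mechanical shaft power: P = torque * angular speed (SI units)."""
    omega = 2.0 * math.pi * rpm / 60.0   # angular speed in rad/s
    return torque_nm * omega

# Invert the relation for the marine diesel figures quoted above:
omega = 2.0 * math.pi * 102 / 60.0
torque = 80e6 / omega
print(f"{torque / 1e6:.1f} MN*m of torque")          # roughly 7.5 MN*m
print(shaft_power_watts(torque, 102) / 1e6, "MW")     # consistency check, ~80 MW
```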
[ { "paragraph_id": 0, "text": "An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy.", "title": "" }, { "paragraph_id": 1, "text": "Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form, so heat engines have special importance. Some natural processes, such as atmospheric convection cells convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing.", "title": "" }, { "paragraph_id": 2, "text": "Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine, in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion.", "title": "" }, { "paragraph_id": 3, "text": "Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine).", "title": "" }, { "paragraph_id": 4, "text": "Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions.", "title": "" }, { "paragraph_id": 5, "text": "All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen resulting in small emissions of NOx, which is adverse even in small quantities. If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, large quantities of CO2 are emitted, a potent greenhouse gas. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of NOx, but this is an electrochemical engine not a heat engine.", "title": "Emission/Byproducts" }, { "paragraph_id": 6, "text": "The word engine derives from Old French engin, from the Latin ingenium–the root of the word ingenious. 
Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the industrial revolution were described as engines—the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses.", "title": "Terminology" }, { "paragraph_id": 7, "text": "In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets.", "title": "Terminology" }, { "paragraph_id": 8, "text": "When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb moto which means 'to set in motion', or 'maintain motion'. Thus a motor is a device that imparts motion.", "title": "Terminology" }, { "paragraph_id": 9, "text": "Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though they consume fuel.", "title": "Terminology" }, { "paragraph_id": 10, "text": "A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam).", "title": "Terminology" }, { "paragraph_id": 11, "text": "Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. 
The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times.", "title": "History" }, { "paragraph_id": 12, "text": "According to Strabo, a water-powered mill was built in Kaberia of the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed-wheels made of wood and metal to regulate the speed of rotation. More sophisticated small devices, such as the Antikythera Mechanism used complex trains of gears and dials to act as calendars or predict astronomical events. In a poem by Ausonius in the 4th century AD, he mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind and steam powered machines in the 1st century AD, including the Aeolipile and the vending machine, often these machines were associated with worship, such as animated altars and automated temple doors.", "title": "History" }, { "paragraph_id": 13, "text": "Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour.", "title": "History" }, { "paragraph_id": 14, "text": "In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629.", "title": "History" }, { "paragraph_id": 15, "text": "In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. After invention, this innovation spread throughout Europe.", "title": "History" }, { "paragraph_id": 16, "text": "The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation.", "title": "History" }, { "paragraph_id": 17, "text": "As for internal combustion piston engines, these were tested in France in 1807 by de Rivaz and independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. 
In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine.", "title": "History" }, { "paragraph_id": 18, "text": "The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir.", "title": "History" }, { "paragraph_id": 19, "text": "In 1877, the Otto cycle was capable of giving a far higher power-to-weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft.", "title": "History" }, { "paragraph_id": 20, "text": "The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the more efficient Diesel engine is used for trucks and buses. However, in recent years, turbo Diesel engines have become increasingly popular, especially outside of the United States, even for quite small cars.", "title": "History" }, { "paragraph_id": 21, "text": "In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. Engines of this design are often referred to as flat engines because of their shape and lower profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, many BMW and Honda motorcycles, and propeller aircraft engines.", "title": "History" }, { "paragraph_id": 22, "text": "Continuance of the use of the internal combustion engine for automobiles is partly due to the improvement of engine control systems (onboard computers providing engine management processes, and electronically controlled fuel injection). Forced air induction by turbocharging and supercharging have increased power outputs and engine efficiencies. Similar changes have been applied to smaller diesel engines giving them almost the same power characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine propelled cars in Europe. Larger diesel engines are still often used in trucks and heavy machinery, although they require special machining not available in most factories. Diesel engines produce lower hydrocarbon and CO2 emissions, but greater particulate and NOx pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines.", "title": "History" }, { "paragraph_id": 23, "text": "In the first half of the 20th century, a trend of increasing engine power occurred, particularly in the U.S models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. 
The higher forces and pressures created by these changes caused engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements.", "title": "History" }, { "paragraph_id": 24, "text": "Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around 110 °C (230 °F).", "title": "History" }, { "paragraph_id": 25, "text": "Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. Four cylinders and power ratings from 19 to 120 hp (14 to 90 kW) were used in a majority of the models. Several three-cylinder, two-stroke-cycle models were built, while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. The Bugatti Veyron 16.4 operates with a W16 engine, meaning that two V8 cylinder layouts are positioned next to each other to create the W shape sharing the same crankshaft.", "title": "History" }, { "paragraph_id": 26, "text": "The largest internal combustion engine ever built is the Wärtsilä-Sulzer RTA96-C, a 14-cylinder, 2-stroke turbocharged diesel engine that was designed to power the Emma Mærsk, the largest container ship in the world when launched in 2006. This engine has a mass of 2,300 tonnes, and when running at 102 rpm (1.7 Hz) produces over 80 MW, and can use up to 250 tonnes of fuel per day.", "title": "History" }, { "paragraph_id": 27, "text": "An engine can be put into a category according to two criteria: the form of energy it accepts in order to create motion, and the type of motion it outputs.", "title": "Types" }, { "paragraph_id": 28, "text": "Combustion engines are heat engines driven by the heat of a combustion process.", "title": "Types" }, { "paragraph_id": 29, "text": "The internal combustion engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and high pressure gases, which are produced by the combustion, directly applies force to components of the engine, such as the pistons or turbine blades or a nozzle, and by moving it over a distance, generates mechanical work.", "title": "Types" }, { "paragraph_id": 30, "text": "An external combustion engine (EC engine) is a heat engine where an internal working fluid is heated by combustion of an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine, produces motion and usable work. The fluid is then cooled, compressed and reused (closed cycle), or (less commonly) dumped, and cool fluid pulled in (open cycle air engine).", "title": "Types" }, { "paragraph_id": 31, "text": "\"Combustion\" refers to burning fuel with an oxidizer, to supply the heat.
Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; but are not then strictly classed as external combustion engines, but as external thermal engines.", "title": "Types" }, { "paragraph_id": 32, "text": "The working fluid can be a gas as in a Stirling engine, or steam as in a steam engine or an organic liquid such as n-pentane in an Organic Rankine cycle. The fluid can be of any composition; gas is by far the most common, although even single-phase liquid is sometimes used. In the case of the steam engine, the fluid changes phases between liquid and gas.", "title": "Types" }, { "paragraph_id": 33, "text": "Air-breathing combustion engines are combustion engines that use the oxygen in atmospheric air to oxidise ('burn') the fuel, rather than carrying an oxidiser, as in a rocket. Theoretically, this should result in a better specific impulse than for rocket engines.", "title": "Types" }, { "paragraph_id": 34, "text": "A continuous stream of air flows through the air-breathing engine. This air is compressed, mixed with fuel, ignited and expelled as the exhaust gas. In reaction engines, the majority of the combustion energy (heat) exits the engine as exhaust gas, which provides thrust directly.", "title": "Types" }, { "paragraph_id": 35, "text": "Typical air-breathing engines include:", "title": "Types" }, { "paragraph_id": 36, "text": "The operation of engines typically has a negative impact upon air quality and ambient sound levels. There has been a growing emphasis on the pollution-producing features of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements. Though a few limited-production battery-powered electric vehicles have appeared, they have not proved competitive owing to costs and operating characteristics. In the 21st century the diesel engine has been increasing in popularity with automobile owners. However, the gasoline engine and the Diesel engine, with their new emission-control devices to improve emission performance, have not yet been significantly challenged. A number of manufacturers have introduced hybrid engines, mainly involving a small gasoline engine coupled with an electric motor and with a large battery bank; these are starting to become a popular option because of environmental awareness.", "title": "Types" }, { "paragraph_id": 37, "text": "Exhaust gas from a spark ignition engine consists of the following: nitrogen 70 to 75% (by volume), water vapor 10 to 12%, carbon dioxide 10 to 13.5%, hydrogen 0.5 to 2%, oxygen 0.2 to 2%, carbon monoxide 0.1 to 6%, unburnt hydrocarbons and partial oxidation products (e.g. aldehydes) 0.5 to 1%, nitrogen monoxide 0.01 to 0.4%, nitrous oxide <100 ppm, sulfur dioxide 15 to 60 ppm, traces of other compounds such as fuel additives and lubricants, also halogen and metallic compounds, and other particles. Carbon monoxide is highly toxic, and can cause carbon monoxide poisoning, so it is important to avoid any build-up of the gas in a confined space. Catalytic converters can reduce toxic emissions, but not eliminate them.
Also, resulting greenhouse gas emissions, chiefly carbon dioxide, from the widespread use of engines in the modern industrialized world is contributing to the global greenhouse effect – a primary concern regarding global warming.", "title": "Types" }, { "paragraph_id": 38, "text": "Some engines convert heat from noncombustive processes into mechanical work, for example a nuclear power plant uses the heat from the nuclear reaction to produce steam and drive a steam engine, or a gas turbine in a rocket engine may be driven by decomposing hydrogen peroxide. Apart from the different energy source, the engine is often engineered much the same as an internal or external combustion engine.", "title": "Types" }, { "paragraph_id": 39, "text": "Another group of noncombustive engines includes thermoacoustic heat engines (sometimes called \"TA engines\") which are thermoacoustic devices that use high-amplitude sound waves to pump heat from one place to another, or conversely use a heat difference to induce high-amplitude sound waves. In general, thermoacoustic engines can be divided into standing wave and travelling wave devices.", "title": "Types" }, { "paragraph_id": 40, "text": "Stirling engines can be another form of non-combustive heat engine. They use the Stirling thermodynamic cycle to convert heat into work. An example is the alpha type Stirling engine, whereby gas flows, via a recuperator, between a hot cylinder and a cold cylinder, which are attached to reciprocating pistons 90° out of phase. The gas receives heat at the hot cylinder and expands, driving the piston that turns the crankshaft. After expanding and flowing through the recuperator, the gas rejects heat at the cold cylinder and the ensuing pressure drop leads to its compression by the other (displacement) piston, which forces it back to the hot cylinder.", "title": "Types" }, { "paragraph_id": 41, "text": "Non-thermal motors usually are powered by a chemical reaction, but are not heat engines. Examples include:", "title": "Types" }, { "paragraph_id": 42, "text": "An electric motor uses electrical energy to produce mechanical energy, usually through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Electric motors can be run as generators and vice versa, although this is not always practical. Electric motors are ubiquitous, being found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the thousands of kilowatts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application.", "title": "Types" }, { "paragraph_id": 43, "text": "The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. 
Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks.", "title": "Types" }, { "paragraph_id": 44, "text": "To reduce the electric energy consumption from motors and their associated carbon footprints, various regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher efficiency electric motors. A well-designed motor can convert over 90% of its input energy into useful power for decades. When the efficiency of a motor is raised by even a few percentage points, the savings, in kilowatt hours (and therefore in cost), are enormous. The electrical energy efficiency of a typical industrial induction motor can be improved by: 1) reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivities, such as copper), 2) reducing the electrical losses in the rotor coil or casting (e.g., by using materials with higher electrical conductivities, such as copper), 3) reducing magnetic losses by using better quality magnetic steel, 4) improving the aerodynamics of motors to reduce mechanical windage losses, 5) improving bearings to reduce friction losses, and 6) minimizing manufacturing tolerances. For further discussion on this subject, see Premium efficiency.", "title": "Types" }, { "paragraph_id": 45, "text": "By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor.", "title": "Types" }, { "paragraph_id": 46, "text": "Some motors are powered by potential or kinetic energy; for example, some funiculars, gravity plane and ropeway conveyors have used the energy from moving water or rocks, and some clocks have a weight that falls under gravity. Other forms of potential energy include compressed gases (such as pneumatic motors), springs (clockwork motors) and elastic bands.", "title": "Types" }, { "paragraph_id": 47, "text": "Historic military siege engines, including large catapults, trebuchets, and (to some extent) battering rams, were powered by potential energy.", "title": "Types" }, { "paragraph_id": 48, "text": "A pneumatic motor is a machine that converts potential energy in the form of compressed air into mechanical work. Pneumatic motors generally convert the compressed air to mechanical work through either linear or rotary motion. Linear motion can come from either a diaphragm or piston actuator, while rotary motion is supplied by either a vane type air motor or piston air motor. Pneumatic motors have found widespread success in the hand-held tool industry and continual attempts are being made to expand their use to the transportation industry. However, pneumatic motors must overcome efficiency deficiencies before being seen as a viable option in the transportation industry.", "title": "Types" }, { "paragraph_id": 49, "text": "A hydraulic motor derives its power from a pressurized liquid. This type of engine is used to move heavy loads and drive machinery.", "title": "Types" }, { "paragraph_id": 50, "text": "Some motor units can have multiple sources of energy.
For example, a plug-in hybrid electric vehicle's electric motor could source electricity from either a battery or from fossil fuels inputs via an internal combustion engine and a generator.", "title": "Types" }, { "paragraph_id": 51, "text": "The following are used in the assessment of the performance of an engine.", "title": "Performance" }, { "paragraph_id": 52, "text": "Speed refers to crankshaft rotation in piston engines and the speed of compressor/turbine rotors and electric motor rotors. It is measured in revolutions per minute (rpm).", "title": "Performance" }, { "paragraph_id": 53, "text": "Thrust is the force exerted on an airplane as a consequence of its propeller or jet engine accelerating the air passing through it. It is also the force exerted on a ship as a consequence of its propeller accelerating the water passing through it.", "title": "Performance" }, { "paragraph_id": 54, "text": "Torque is a turning moment on a shaft and is calculated by multiplying the force causing the moment by its distance from the shaft.", "title": "Performance" }, { "paragraph_id": 55, "text": "Power is the measure of how fast work is done.", "title": "Performance" }, { "paragraph_id": 56, "text": "Efficiency is a measure of how much fuel is wasted in producing power.", "title": "Performance" }, { "paragraph_id": 57, "text": "Vehicle noise is predominantly from the engine at low vehicle speeds and from tires and the air flowing past the vehicle at higher speeds. Electric motors are quieter than internal combustion engines. Thrust-producing engines, such as turbofans, turbojets and rockets emit the greatest amount of noise due to the way their thrust-producing, high-velocity exhaust streams interact with the surrounding stationary air. Noise reduction technology includes intake and exhaust system mufflers (silencers) on gasoline and diesel engines and noise attenuation liners in turbofan inlets.", "title": "Performance" }, { "paragraph_id": 58, "text": "Particularly notable kinds of engines include:", "title": "Engines by use" } ]
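The torque, power and speed definitions in the performance paragraphs above imply the standard rotational relations. The following is a brief worked form; the numerical figures are purely illustrative assumptions and do not come from the article.

\[
  \tau = F\,r,
  \qquad
  P = \tau\,\omega = \tau \cdot \frac{2\pi N_{\mathrm{rpm}}}{60}
\]
\[
  \text{e.g. (assumed figures)}\quad \tau = 100~\mathrm{N\,m},\ N_{\mathrm{rpm}} = 3000
  \;\Rightarrow\; P = 100 \cdot \frac{2\pi \cdot 3000}{60} \approx 3.1\times 10^{4}~\mathrm{W} \approx 31.4~\mathrm{kW}.
\]

Here F is the force applied at perpendicular distance r from the shaft, N_rpm is the shaft speed in revolutions per minute, and ω is the angular speed in radians per second.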
An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy. Available energy sources include potential energy, heat energy, chemical energy, electric potential and nuclear energy. Many of these processes generate heat as an intermediate energy form, so heat engines have special importance. Some natural processes, such as atmospheric convection cells convert environmental heat into motion. Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing. Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine, in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine produces thrust by expelling reaction mass, in accordance with Newton's third law of motion. Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion. Chemical heat engines which employ air as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere need to carry an additional fuel component called the oxidizer; or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions.
2001-10-14T09:44:53Z
2023-12-05T12:25:47Z
[ "Template:Cite book", "Template:Cite journal", "Template:Machines", "Template:OED", "Template:Ordered list", "Template:Clarify", "Template:Div col end", "Template:Citation", "Template:Refbegin", "Template:Commons category", "Template:Wiktionary", "Template:Short description", "Template:Thermodynamic cycles", "Template:US patent", "Template:NOx", "Template:Cite Dictionary.com", "Template:Authority control", "Template:CO2", "Template:Webarchive", "Template:Cite encyclopedia", "Template:Citation needed", "Template:Div col", "Template:Reflist", "Template:ISBN", "Template:Lang", "Template:Wikt-lang", "Template:Cite web", "Template:Cite Merriam-Webster", "Template:Refend", "Template:Heat engines", "Template:Hatgrp", "Template:Main", "Template:Convert" ]
https://en.wikipedia.org/wiki/Engine
9,643
Economic and monetary union
An economic and monetary union (EMU) is a type of trade bloc that features a combination of a common market, customs union, and monetary union. Established via a trade pact, an EMU constitutes the sixth of seven stages in the process of economic integration. An EMU agreement usually combines a customs union with a common market. A typical EMU establishes free trade and a common external tariff throughout its jurisdiction. It is also designed to protect freedom in the movement of goods, services, and people. This arrangement is distinct from a monetary union (e.g., the Latin Monetary Union), which does not usually involve a common market. As with the economic and monetary union established among the 27 member states of the European Union (EU), an EMU may affect different parts of its jurisdiction in different ways. Some areas are subject to separate customs regulations from other areas subject to the EMU. These various arrangements may be established in a formal agreement, or they may exist on a de facto basis. For example, not all EU member states use the Euro established by its currency union, and not all EU member states are part of the Schengen Area. Some EU members participate in both unions, and some in neither. Territories of the United States, Australian External Territories and New Zealand territories each share a currency and, for the most part, the market of their respective mainland states. However, they are generally not part of the same customs territories. Several countries initially attempted to form an EMU at the Hague Summit in 1969. Afterward, a "draft plan" was announced. During this time, the main member presiding over this decision was Pierre Werner, Prime Minister of Luxembourg. The decision to form the Economic and Monetary Union of the European Union (EMU) was accepted in December 1991, which later became part of the Maastricht Treaty (the Treaty on European Union). The EMU involves four main activities. The first responsibility is to be in charge of implementing effective monetary policy for the euro area with price stability. There is a group of economists whose only role is studying how to improve the monetary policy while maintaining price stability. They conduct research, and their results are presented to the leaders of the EMU. Thereafter, the role of the leaders is to find a suitable way to implement the economists' work into their country's policies. Maintaining price stability is a long-term goal for all states in the EU, due to the effects it might have on the Euro as a currency. Secondly, the EMU must coordinate economic and fiscal policies in EU countries. They must find an equilibrium between the implementation of monetary and fiscal policies. They will advise countries to have greater coordination, even if that means having countries tightly coupled with looser monetary and tighter fiscal policy. Not coordinating the monetary market could result in risking an unpredictable situation. The EMU also deliberates on a mixed policy option, which has been shown to be beneficial in some empirical studies. Thirdly, the EMU ensures that the single market runs smoothly. The member countries respect the decisions made by the EMU and ensure that their actions will be in favor of a stable market. Finally, regulations of the EMU aid in supervising and monitoring financial institutions. There is an imperative need for all members of the EMU to act in unison. 
Therefore, the EMU has to have institutions supervising all the member states to protect the main aim of the EMU. The economic roles of nations within the EMU are to:
[ { "paragraph_id": 0, "text": "An economic and monetary union (EMU) is a type of trade bloc that features a combination of a common market, customs union, and monetary union. Established via a trade pact, an EMU constitutes the sixth of seven stages in the process of economic integration. An EMU agreement usually combines a customs union with a common market. A typical EMU establishes free trade and a common external tariff throughout its jurisdiction. It is also designed to protect freedom in the movement of goods, services, and people. This arrangement is distinct from a monetary union (e.g., the Latin Monetary Union), which does not usually involve a common market. As with the economic and monetary union established among the 27 member states of the European Union (EU), an EMU may affect different parts of its jurisdiction in different ways. Some areas are subject to separate customs regulations from other areas subject to the EMU. These various arrangements may be established in a formal agreement, or they may exist on a de facto basis. For example, not all EU member states use the Euro established by its currency union, and not all EU member states are part of the Schengen Area. Some EU members participate in both unions, and some in neither.", "title": "" }, { "paragraph_id": 1, "text": "Territories of the United States, Australian External Territories and New Zealand territories each share a currency and, for the most part, the market of their respective mainland states. However, they are generally not part of the same customs territories.", "title": "" }, { "paragraph_id": 2, "text": "Several countries initially attempted to form an EMU at the Hague Summit in 1969. Afterward, a \"draft plan\" was announced. During this time, the main member presiding over this decision was Pierre Werner, Prime Minister of Luxembourg. The decision to form the Economic and Monetary Union of the European Union (EMU) was accepted in December 1991, which later became part of the Maastricht Treaty (the Treaty on European Union).", "title": "History" }, { "paragraph_id": 3, "text": "The EMU involves four main activities.", "title": "Processes in the European EMU" }, { "paragraph_id": 4, "text": "The first responsibility is to be in charge of implementing effective monetary policy for the euro area with price stability. There is a group of economists whose only role is studying how to improve the monetary policy while maintaining price stability. They conduct research, and their results are presented to the leaders of the EMU. Thereafter, the role of the leaders is to find a suitable way to implement the economists' work into their country's policies. Maintaining price stability is a long-term goal for all states in the EU, due to the effects it might have on the Euro as a currency.", "title": "Processes in the European EMU" }, { "paragraph_id": 5, "text": "Secondly, the EMU must coordinate economic and fiscal policies in EU countries. They must find an equilibrium between the implementation of monetary and fiscal policies. They will advise countries to have greater coordination, even if that means having countries tightly coupled with looser monetary and tighter fiscal policy. Not coordinating the monetary market could result in risking an unpredictable situation. 
The EMU also deliberates on a mixed policy option, which has been shown to be beneficial in some empirical studies.", "title": "Processes in the European EMU" }, { "paragraph_id": 6, "text": "Thirdly, the EMU ensures that the single market runs smoothly. The member countries respect the decisions made by the EMU and ensure that their actions will be in favor of a stable market.", "title": "Processes in the European EMU" }, { "paragraph_id": 7, "text": "Finally, regulations of the EMU aid in supervising and monitoring financial institutions. There is an imperative need for all members of the EMU to act in unison. Therefore, the EMU has to have institutions supervising all the member states to protect the main aim of the EMU.", "title": "Processes in the European EMU" }, { "paragraph_id": 8, "text": "The economic roles of nations within the EMU are to:", "title": "Processes in the European EMU" } ]
An economic and monetary union (EMU) is a type of trade bloc that features a combination of a common market, customs union, and monetary union. Established via a trade pact, an EMU constitutes the sixth of seven stages in the process of economic integration. An EMU agreement usually combines a customs union with a common market. A typical EMU establishes free trade and a common external tariff throughout its jurisdiction. It is also designed to protect freedom in the movement of goods, services, and people. This arrangement is distinct from a monetary union, which does not usually involve a common market. As with the economic and monetary union established among the 27 member states of the European Union (EU), an EMU may affect different parts of its jurisdiction in different ways. Some areas are subject to separate customs regulations from other areas subject to the EMU. These various arrangements may be established in a formal agreement, or they may exist on a de facto basis. For example, not all EU member states use the Euro established by its currency union, and not all EU member states are part of the Schengen Area. Some EU members participate in both unions, and some in neither. Territories of the United States, Australian External Territories and New Zealand territories each share a currency and, for the most part, the market of their respective mainland states. However, they are generally not part of the same customs territories.
2023-01-07T02:40:27Z
[ "Template:Dead link", "Template:Trade bloc", "Template:Main", "Template:Citation needed", "Template:Reflist", "Template:Cite web", "Template:Cite journal", "Template:Short description", "Template:Nowrap", "Template:World economic integration", "Template:Cite encyclopedia", "Template:Economic integration" ]
https://en.wikipedia.org/wiki/Economic_and_monetary_union
9,644
European Environment Agency
The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment. The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment. Its goal is to help those involved in developing, implementing and evaluating environmental policy, and to inform the general public. The EEA was established by the European Economic Community (EEC) Regulation 1210/1990 (amended by EEC Regulation 933/1999 and EC Regulation 401/2009) and became operational in 1994, headquartered in Copenhagen, Denmark. The agency is governed by a management board composed of representatives of the governments of its 32 member states, a European Commission representative and two scientists appointed by the European Parliament, assisted by its Scientific Committee. The current Executive Director of the agency is Leena Ylä-Mononen, who has been appointed for a five-year term, starting on 1 June 2023. Ms Ylä-Mononen is the successor of professor Hans Bruyninckx. The member states of the European Union are members; however other states may become members of it by means of agreements concluded between them and the EU. It was the first EU body to open its membership to the 13 candidate countries (pre-2004 enlargement). The EEA has 32 member countries and six cooperating countries. The members are the 27 European Union member states together with Iceland, Liechtenstein, Norway, Switzerland and Turkey. Since Brexit in 2020, the UK is not a member of the EU anymore and therefore not a member state of the EEA. The six Western Balkan countries are cooperating countries: Albania, Bosnia and Herzegovina, Montenegro, North Macedonia, Serbia as well as Kosovo under the UN Security Council Resolution 1244/99. These cooperation activities are integrated into Eionet and are supported by the EU under the "Instrument for Pre-Accession Assistance". The EEA is an active member of the EPA Network. The European Environment Agency (EEA) produces assessments based on quality-assured data on a wide range of issues from biodiversity, air quality, transport to climate change. These assessments are closely linked to the European Union's environment policies and legislation and help monitor progress in some areas and indicate areas where additional efforts are needed. As required in its founding regulation, the EEA publishes its flagship report the State and Outlook of Europe's environment (SOER), which is an integrated assessment, analysing trends, progress to targets as well as outlook for the mid- to long-term. The agency publishes annually a report on Europe's most polluted provinces for air quality, detailing fine particulate matter PM 2.5. The EEA shares this information, including the datasets used in its assessments, through its main website and a number of thematic information platforms such as Biodiversity Information System for Europe (BISE), Water Information System for Europe (WISE) and ClimateADAPT. The Climate-ADAPT knowledge platform presents information and data on expected climatic changes, the vulnerability of regions and sectors, adaptation case studies, and adaptation options, adaptation planning tools, and EU policy. The European Nature Information System (EUNIS) provides access to the publicly available data in the EUNIS database for species, habitat types and protected sites across Europe. It is part of the European Biodiversity data centre (BDC), and is maintained by the EEA. 
The database contains data. The European Environment Information and Observation Network (Eionet) is a collaboration network between EEA member countries and non-member, cooperating nations. Cooperation is facilitated through different national environmental agencies, ministries, or offices. Eionet encourages the sharing of data and highlights specific topics for discussion and cooperation among participating countries. Eionet currently covers seven European Topic Centres (ETCs): The European Environment Agency (EEA) implements the "Shared Environmental Information System" principles and best practices via projects such as the "ENI SEIS II EAST PROJECT" & the "ENI SEIS II SOUTH PROJECT" to support environmental protection within the six eastern partnership countries (ENP) & to contribute to the reduction in marine pollution in the Mediterranean through the shared availability and access to relevant environmental information. As for every EU body and institution, the EEA's budget is subject to a discharge process, consisting of external examination of its budget execution and financial management, to ensure sound financial management of its budget. Since its establishment, the EEA has been granted discharge for its budget without exception. The EEA provides full access to its administrative and budgetary documents in its public documents register. The discharge process for the 2010 budget required additional clarifications. In February 2012, the European Parliament's Committee on Budgetary Control published a draft report, identifying areas of concern in the use of funds and its influence for the 2010 budget, such as a 26% budget increase from 2009 to 2010 to €50 600 000, and questioned whether maximum competition and value-for-money principles were honored in hiring, as well as possible fictitious employees. The EEA's Executive Director refuted allegations of irregularities in a public hearing. On 27 March 2012, Members of the European Parliament (MEPs) voted on the report and commended the cooperation between the Agency and NGOs working in the environmental area. On 23 October 2012, the European Parliament voted and granted the discharge to the European Environment Agency for its 2010 budget. In addition to its 32 members and six Balkan cooperating countries, the EEA also cooperates and fosters partnerships with its neighbours and other countries and regions, mostly in the context of the European Neighbourhood Policy: Additionally, the EEA cooperates with multiple international organizations and the corresponding agencies of the following countries: The 26 official languages used by the EEA are: Bulgarian, Czech, Croatian, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Hungarian, Icelandic, Italian, Lithuanian, Latvian, Malti, Dutch, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovene, Swedish and Turkish.
[ { "paragraph_id": 0, "text": "The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment.", "title": "" }, { "paragraph_id": 1, "text": "The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment. Its goal is to help those involved in developing, implementing and evaluating environmental policy, and to inform the general public.", "title": "Definition" }, { "paragraph_id": 2, "text": "The EEA was established by the European Economic Community (EEC) Regulation 1210/1990 (amended by EEC Regulation 933/1999 and EC Regulation 401/2009) and became operational in 1994, headquartered in Copenhagen, Denmark.", "title": "Organization" }, { "paragraph_id": 3, "text": "The agency is governed by a management board composed of representatives of the governments of its 32 member states, a European Commission representative and two scientists appointed by the European Parliament, assisted by its Scientific Committee.", "title": "Organization" }, { "paragraph_id": 4, "text": "The current Executive Director of the agency is Leena Ylä-Mononen, who has been appointed for a five-year term, starting on 1 June 2023. Ms Ylä-Mononen is the successor of professor Hans Bruyninckx.", "title": "Organization" }, { "paragraph_id": 5, "text": "The member states of the European Union are members; however other states may become members of it by means of agreements concluded between them and the EU.", "title": "Member countries" }, { "paragraph_id": 6, "text": "It was the first EU body to open its membership to the 13 candidate countries (pre-2004 enlargement).", "title": "Member countries" }, { "paragraph_id": 7, "text": "The EEA has 32 member countries and six cooperating countries. The members are the 27 European Union member states together with Iceland, Liechtenstein, Norway, Switzerland and Turkey.", "title": "Member countries" }, { "paragraph_id": 8, "text": "Since Brexit in 2020, the UK is not a member of the EU anymore and therefore not a member state of the EEA.", "title": "Member countries" }, { "paragraph_id": 9, "text": "The six Western Balkan countries are cooperating countries: Albania, Bosnia and Herzegovina, Montenegro, North Macedonia, Serbia as well as Kosovo under the UN Security Council Resolution 1244/99. These cooperation activities are integrated into Eionet and are supported by the EU under the \"Instrument for Pre-Accession Assistance\".", "title": "Member countries" }, { "paragraph_id": 10, "text": "The EEA is an active member of the EPA Network.", "title": "Member countries" }, { "paragraph_id": 11, "text": "The European Environment Agency (EEA) produces assessments based on quality-assured data on a wide range of issues from biodiversity, air quality, transport to climate change. These assessments are closely linked to the European Union's environment policies and legislation and help monitor progress in some areas and indicate areas where additional efforts are needed.", "title": "Reports, data and knowledge" }, { "paragraph_id": 12, "text": "As required in its founding regulation, the EEA publishes its flagship report the State and Outlook of Europe's environment (SOER), which is an integrated assessment, analysing trends, progress to targets as well as outlook for the mid- to long-term. 
The agency publishes annually a report on Europe's most polluted provinces for air quality, detailing fine particulate matter PM 2.5.", "title": "Reports, data and knowledge" }, { "paragraph_id": 13, "text": "The EEA shares this information, including the datasets used in its assessments, through its main website and a number of thematic information platforms such as Biodiversity Information System for Europe (BISE), Water Information System for Europe (WISE) and ClimateADAPT. The Climate-ADAPT knowledge platform presents information and data on expected climatic changes, the vulnerability of regions and sectors, adaptation case studies, and adaptation options, adaptation planning tools, and EU policy.", "title": "Reports, data and knowledge" }, { "paragraph_id": 14, "text": "The European Nature Information System (EUNIS) provides access to the publicly available data in the EUNIS database for species, habitat types and protected sites across Europe. It is part of the European Biodiversity data centre (BDC), and is maintained by the EEA.", "title": "Reports, data and knowledge" }, { "paragraph_id": 15, "text": "The database contains data", "title": "Reports, data and knowledge" }, { "paragraph_id": 16, "text": "The European Environment Information and Observation Network (Eionet) is a collaboration network between EEA member countries and non-member, cooperating nations. Cooperation is facilitated through different national environmental agencies, ministries, or offices. Eionet encourages the sharing of data and highlights specific topics for discussion and cooperation among participating countries.", "title": "European environment information and observation network" }, { "paragraph_id": 17, "text": "Eionet currently includes covers seven European Topic Centres (ETCs):", "title": "European environment information and observation network" }, { "paragraph_id": 18, "text": "The European Environment Agency (EEA) implements the \"Shared Environmental Information System\" principles and best practices via projects such as the \"ENI SEIS II EAST PROJECT\" & the \"ENI SEIS II SOUTH PROJECT\" to support environmental protection within the six eastern partnership countries (ENP) & to contribute to the reduction in marine pollution in the Mediterranean through the shared availability and access to relevant environmental information.", "title": "European environment information and observation network" }, { "paragraph_id": 19, "text": "As for every EU body and institution, the EEA's budget is subject to a discharge process, consisting of external examination of its budget execution and financial management, to ensure sound financial management of its budget. Since its establishment, the EEA has been granted discharge for its budget without exception. The EEA provides full access to its administrative and budgetary documents in its public documents register.", "title": "Budget management and discharge" }, { "paragraph_id": 20, "text": "The discharge process for the 2010 budget required additional clarifications. In February 2012, the European Parliament's Committee on Budgetary Control published a draft report, identifying areas of concern in the use of funds and its influence for the 2010 budget such as a 26% budget increase from 2009 to 2010 to €50 600 000. 
and questioned that maximum competition and value-for-money principles were honored in hiring, also possible fictitious employees.", "title": "Budget management and discharge" }, { "paragraph_id": 21, "text": "The EEA's Executive Director refuted allegations of irregularities in a public hearing. On 27 March 2012 Members of the European Parliament (MEPs) voted on the report and commended the cooperation between the Agency and NGOs working in the environmental area. On 23 October 2012, the European Parliament voted and granted the discharge to the European Environment Agency for its 2010 budget.", "title": "Budget management and discharge" }, { "paragraph_id": 22, "text": "In addition to its 32 members and six Balkan cooperating countries, the EEA also cooperates and fosters partnerships with its neighbours and other countries and regions, mostly in the context of the European Neighbourhood Policy:", "title": "International cooperation" }, { "paragraph_id": 23, "text": "Additionally the EEA cooperates with multiple international organizations and the corresponding agencies of the following countries:", "title": "International cooperation" }, { "paragraph_id": 24, "text": "The 26 official languages used by the EEA are: Bulgarian, Czech, Croatian, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Hungarian, Icelandic, Italian, Lithuanian, Latvian, Malti, Dutch, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovene, Swedish and Turkish.", "title": "Official languages" } ]
The European Environment Agency (EEA) is the agency of the European Union (EU) which provides independent information on the environment.
2001-08-04T12:22:43Z
2023-12-15T00:11:37Z
[ "Template:Citation", "Template:Authority control", "Template:Flag", "Template:Rp", "Template:Citation needed", "Template:Reflist", "Template:Agencies of the European Union", "Template:Multiple issues", "Template:Flagicon", "Template:Webarchive", "Template:Short description", "Template:Use dmy dates", "Template:Infobox government agency", "Template:Cite web", "Template:Official website" ]
https://en.wikipedia.org/wiki/European_Environment_Agency
9,645
EV
Ev or EV may refer to:
[ { "paragraph_id": 0, "text": "Ev or EV may refer to:", "title": "" } ]
Ev or EV may refer to:
2023-07-02T09:36:26Z
[ "Template:Tocright", "Template:Lang", "Template:Canned search", "Template:Lookfrom", "Template:Intitle", "Template:Disambiguation", "Template:Wiktionary" ]
https://en.wikipedia.org/wiki/EV
9,646
Erlang (programming language)
Erlang (/ˈɜːrlæŋ/ UR-lang) is a general-purpose, concurrent, functional high-level programming language, and a garbage-collected runtime system. The term Erlang is used interchangeably with Erlang/OTP, or Open Telecom Platform (OTP), which consists of the Erlang runtime system, several ready-to-use components (OTP) mainly written in Erlang, and a set of design principles for Erlang programs. The Erlang runtime system is designed for systems with these traits: The Erlang programming language has immutable data, pattern matching, and functional programming. The sequential subset of the Erlang language supports eager evaluation, single assignment, and dynamic typing. A normal Erlang application is built out of hundreds of small Erlang processes. It was originally proprietary software within Ericsson, developed by Joe Armstrong, Robert Virding, and Mike Williams in 1986, but was released as free and open-source software in 1998. Erlang/OTP is supported and maintained by the Open Telecom Platform (OTP) product unit at Ericsson. The name Erlang, attributed to Bjarne Däcker, has been presumed by those working on the telephony switches (for whom the language was designed) to be a reference to Danish mathematician and engineer Agner Krarup Erlang and a syllabic abbreviation of "Ericsson Language". Erlang was designed with the aim of improving the development of telephony applications. The initial version of Erlang was implemented in Prolog and was influenced by the programming language PLEX used in earlier Ericsson exchanges. By 1988 Erlang had proven that it was suitable for prototyping telephone exchanges, but the Prolog interpreter was far too slow. One group within Ericsson estimated that it would need to be 40 times faster to be suitable for production use. In 1992, work began on the BEAM virtual machine (VM) which compiles Erlang to C using a mix of natively compiled code and threaded code to strike a balance between performance and disk space. According to co-inventor Joe Armstrong, the language went from lab product to real applications following the collapse of the next-generation AXE telephone exchange named AXE-N in 1995. As a result, Erlang was chosen for the next Asynchronous Transfer Mode (ATM) exchange AXD. In February 1998, Ericsson Radio Systems banned the in-house use of Erlang for new products, citing a preference for non-proprietary languages. The ban caused Armstrong and others to make plans to leave Ericsson. In March 1998 Ericsson announced the AXD301 switch, containing over a million lines of Erlang and reported to achieve a high availability of nine "9"s. In December 1998, the implementation of Erlang was open-sourced and most of the Erlang team resigned to form a new company Bluetail AB. Ericsson eventually relaxed the ban and re-hired Armstrong in 2004. In 2006, native symmetric multiprocessing support was added to the runtime system and VM. Erlang applications are built of very lightweight Erlang processes in the Erlang runtime system. Erlang processes can be seen as "living" objects (object-oriented programming), with data encapsulation and message passing, but capable of changing behavior during runtime. The Erlang runtime system provides strict process isolation between Erlang processes (this includes data and garbage collection, separated individually by each Erlang process) and transparent communication between processes (see Location transparency) on different Erlang nodes (on different hosts). 
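A minimal sketch of the process model described above: an isolated, lightweight process that shares no state and communicates only by asynchronous message passing. The module name, message shapes and one-second timeout are illustrative assumptions, not taken from the article.

-module(echo).
-export([start/0]).

start() ->
    Pid = spawn(fun loop/0),           % a lightweight Erlang process, not an OS thread
    Pid ! {self(), hello},             % "!" posts a message to the process's mailbox
    receive
        {Pid, Reply} -> Reply          % selective receive: only a reply from Pid matches
    after 1000 ->
        timeout                        % give up after one second
    end.

loop() ->
    receive
        {From, Msg} ->
            From ! {self(), {echoed, Msg}},   % reply to whoever sent the message
            loop();                           % tail call keeps the process alive
        stop ->
            ok
    end.

Calling echo:start() spawns the server, sends it {self(), hello} and returns {echoed, hello} once the reply arrives in the caller's mailbox.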
Joe Armstrong, co-inventor of Erlang, summarized the principles of processes in his PhD thesis: Joe Armstrong remarked in an interview with Rackspace in 2013: "If Java is 'write once, run anywhere', then Erlang is 'write once, run forever'." In 2014, Ericsson reported Erlang was being used in its support nodes, and in GPRS, 3G and LTE mobile networks worldwide and also by Nortel and T-Mobile. Erlang is used in RabbitMQ. As Tim Bray, director of Web Technologies at Sun Microsystems, expressed in his keynote at O'Reilly Open Source Convention (OSCON) in July 2008: If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in. Erlang is the programming language used to code WhatsApp. Elixir is a programming language that compiles into BEAM byte code (via Erlang Abstract Format). Since being released as open source, Erlang has been spreading beyond telecoms, establishing itself in other vertical markets such as FinTech, gaming, healthcare, automotive, internet of things and blockchain. Apart from WhatsApp, there are other companies listed as Erlang's success stories: Vocalink (a MasterCard company), Goldman Sachs, Nintendo, AdRoll, Grindr, BT Mobile, Samsung, OpenX, and SITA. A factorial algorithm implemented in Erlang: A tail recursive algorithm that produces the Fibonacci sequence: Here's the same program without the explanatory comments: Quicksort in Erlang, using list comprehension: The above example recursively invokes the function qsort until nothing remains to be sorted. The expression [Front || Front <- Rest, Front < Pivot] is a list comprehension, meaning "Construct a list of elements Front such that Front is a member of Rest, and Front is less than Pivot." ++ is the list concatenation operator. A comparison function can be used for more complicated structures for the sake of readability. The following code would sort lists according to length: A Pivot is taken from the first parameter given to qsort() and the rest of Lists is named Rest. Note that the expression is no different in form from (in the previous example) except for the use of a comparison function in the last part, saying "Construct a list of elements X such that X is a member of Rest, and Smaller is true", with Smaller being defined earlier as The anonymous function is named Smaller in the parameter list of the second definition of qsort so that it can be referenced by that name within that function. It is not named in the first definition of qsort, which deals with the base case of an empty list and thus has no need of this function, let alone a name for it. Erlang has eight primitive data types: And three compound data types: Two forms of syntactic sugar are provided: Erlang has no method to define classes, although there are external libraries available. Erlang is designed with a mechanism that makes it easy for external processes to monitor for crashes (or hardware failures), rather than an in-process mechanism like exception handling used in many other programming languages. Crashes are reported like other messages, which is the only way processes can communicate with each other, and subprocesses can be spawned cheaply (see below). The "let it crash" philosophy prefers that a process be completely restarted rather than trying to recover from a serious failure. 
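A minimal sketch of the factorial and quicksort definitions of the kind described above (the module name examples and the variable Back are assumptions; Pivot, Rest and Front follow the names used in the text):

-module(examples).
-export([fac/1, qsort/1]).

%% Factorial: the base case returns 1, the recursive case multiplies down.
fac(0) -> 1;
fac(N) when N > 0 -> N * fac(N - 1).

%% Quicksort using a list comprehension, as walked through above.
qsort([]) -> [];                      % an empty list is already sorted
qsort([Pivot|Rest]) ->
    %% elements smaller than Pivot, then Pivot, then the remaining elements,
    %% joined with the ++ concatenation operator
    qsort([Front || Front <- Rest, Front < Pivot])
    ++ [Pivot] ++
    qsort([Back || Back <- Rest, Back >= Pivot]).

Pattern matching selects the clause to run, so no explicit test for the empty list is needed in the second clause.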
Though it still requires handling of errors, this philosophy results in less code devoted to defensive programming where error-handling code is highly contextual and specific. A typical Erlang application is written in the form of a supervisor tree. This architecture is based on a hierarchy of processes in which the top level process is known as a "supervisor". The supervisor then spawns multiple child processes that act either as workers or more, lower level supervisors. Such hierarchies can exist to arbitrary depths and have proven to provide a highly scalable and fault-tolerant environment within which application functionality can be implemented. Within a supervisor tree, all supervisor processes are responsible for managing the lifecycle of their child processes, and this includes handling situations in which those child processes crash. Any process can become a supervisor by first spawning a child process, then calling erlang:monitor/2 on that process. If the monitored process then crashes, the supervisor will receive a message containing a tuple whose first member is the atom 'DOWN'. The supervisor is responsible firstly for listening for such messages and secondly, for taking the appropriate action to correct the error condition. Erlang's main strength is support for concurrency. It has a small but powerful set of primitives to create processes and communicate among them. Erlang is conceptually similar to the language occam, though it recasts the ideas of communicating sequential processes (CSP) in a functional framework and uses asynchronous message passing. Processes are the primary means to structure an Erlang application. They are neither operating system processes nor threads, but lightweight processes that are scheduled by BEAM. Like operating system processes (but unlike operating system threads), they share no state with each other. The estimated minimal overhead for each is 300 words. Thus, many processes can be created without degrading performance. In 2005, a benchmark with 20 million processes was successfully performed with 64-bit Erlang on a machine with 16 GB random-access memory (RAM; total 800 bytes/process). Erlang has supported symmetric multiprocessing since release R11B of May 2006. While threads need external library support in most languages, Erlang provides language-level features to create and manage processes with the goal of simplifying concurrent programming. Though all concurrency is explicit in Erlang, processes communicate using message passing instead of shared variables, which removes the need for explicit locks (a locking scheme is still used internally by the VM). Inter-process communication works via a shared-nothing asynchronous message passing system: every process has a "mailbox", a queue of messages that have been sent by other processes and not yet consumed. A process uses the receive primitive to retrieve messages that match desired patterns. A message-handling routine tests messages in turn against each pattern, until one of them matches. When the message is consumed and removed from the mailbox the process resumes execution. A message may comprise any Erlang structure, including primitives (integers, floats, characters, atoms), tuples, lists, and functions. 
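As a sketch of the mailbox and receive mechanism just described (the module name echo and the message shapes are assumptions, not taken from the article), a process that echoes messages back to their sender could look like:

-module(echo).
-export([start/0, loop/0]).

%% Spawn a lightweight process running loop/0 and return its process id.
start() ->
    spawn(echo, loop, []).

loop() ->
    receive
        {From, Text} ->
            From ! {self(), Text},   % reply to the sending process
            loop();                  % wait for the next message
        stop ->
            ok                       % leaving loop/0 ends the process
    end.

A caller would write Pid = echo:start(), send Pid ! {self(), hello}, and collect the reply with its own receive expression.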
The code example below shows the built-in support for distributed processes: As the example shows, processes may be created on remote nodes, and communication with them is transparent in the sense that communication with remote processes works exactly as communication with local processes. Concurrency supports the primary method of error-handling in Erlang. When a process crashes, it neatly exits and sends a message to the controlling process which can then take action, such as starting a new process that takes over the old process's task. The official reference implementation of Erlang uses BEAM. BEAM is included in the official distribution of Erlang, called Erlang/OTP. BEAM executes bytecode which is converted to threaded code at load time. It also includes a native code compiler on most platforms, developed by the High Performance Erlang Project (HiPE) at Uppsala University. Since October 2001 the HiPE system is fully integrated in Ericsson's Open Source Erlang/OTP system. It also supports interpreting, directly from source code via abstract syntax tree, via script as of R11B-5 release of Erlang. Erlang supports language-level Dynamic Software Updating. To implement this, code is loaded and managed as "module" units; the module is a compilation unit. The system can keep two versions of a module in memory at the same time, and processes can concurrently run code from each. The versions are referred to as the "new" and the "old" version. A process will not move into the new version until it makes an external call to its module. An example of the mechanism of hot code loading: For the second version, we add the possibility to reset the count to zero. Only when receiving a message consisting of the atom code_switch will the loop execute an external call to codeswitch/1 (?MODULE is a preprocessor macro for the current module). If there is a new version of the counter module in memory, then its codeswitch/1 function will be called. The practice of having a specific entry-point into a new version allows the programmer to transform state to what is needed in the newer version. In the example, the state is kept as an integer. In practice, systems are built up using design principles from the Open Telecom Platform, which leads to more code upgradable designs. Successful hot code loading is exacting. Code must be written with care to make use of Erlang's facilities. In 1998, Ericsson released Erlang as free and open-source software to ensure its independence from a single vendor and to increase awareness of the language. Erlang, together with libraries and the real-time distributed database Mnesia, forms the OTP collection of libraries. Ericsson and a few other companies support Erlang commercially. Since the open source release, Erlang has been used by several firms worldwide, including Nortel and T-Mobile. Although Erlang was designed to fill a niche and has remained an obscure language for most of its existence, its popularity is growing due to demand for concurrent services. Erlang has found some use in fielding massively multiplayer online role-playing game (MMORPG) servers.
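Returning to the hot code loading counter described earlier: a sketch approximating the second version, the one with the reset facility (the message names other than code_switch are assumptions):

-module(counter).
-export([start/0, codeswitch/1]).

start() -> loop(0).

loop(Sum) ->
    receive
        {increment, Count} ->
            loop(Sum + Count);
        reset ->
            loop(0);                  % the possibility added in the second version
        {counter, Pid} ->
            Pid ! {counter, Sum},     % report the current count
            loop(Sum);
        code_switch ->
            ?MODULE:codeswitch(Sum)   % external call, so the newest loaded version runs
    end.

codeswitch(Sum) -> loop(Sum).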
[ { "paragraph_id": 0, "text": "Erlang (/ˈɜːrlæŋ/ UR-lang) is a general-purpose, concurrent, functional high-level programming language, and a garbage-collected runtime system. The term Erlang is used interchangeably with Erlang/OTP, or Open Telecom Platform (OTP), which consists of the Erlang runtime system, several ready-to-use components (OTP) mainly written in Erlang, and a set of design principles for Erlang programs.", "title": "" }, { "paragraph_id": 1, "text": "The Erlang runtime system is designed for systems with these traits:", "title": "" }, { "paragraph_id": 2, "text": "The Erlang programming language has immutable data, pattern matching, and functional programming. The sequential subset of the Erlang language supports eager evaluation, single assignment, and dynamic typing.", "title": "" }, { "paragraph_id": 3, "text": "A normal Erlang application is built out of hundreds of small Erlang processes.", "title": "" }, { "paragraph_id": 4, "text": "It was originally proprietary software within Ericsson, developed by Joe Armstrong, Robert Virding, and Mike Williams in 1986, but was released as free and open-source software in 1998. Erlang/OTP is supported and maintained by the Open Telecom Platform (OTP) product unit at Ericsson.", "title": "" }, { "paragraph_id": 5, "text": "The name Erlang, attributed to Bjarne Däcker, has been presumed by those working on the telephony switches (for whom the language was designed) to be a reference to Danish mathematician and engineer Agner Krarup Erlang and a syllabic abbreviation of \"Ericsson Language\". Erlang was designed with the aim of improving the development of telephony applications. The initial version of Erlang was implemented in Prolog and was influenced by the programming language PLEX used in earlier Ericsson exchanges. By 1988 Erlang had proven that it was suitable for prototyping telephone exchanges, but the Prolog interpreter was far too slow. One group within Ericsson estimated that it would need to be 40 times faster to be suitable for production use. In 1992, work began on the BEAM virtual machine (VM) which compiles Erlang to C using a mix of natively compiled code and threaded code to strike a balance between performance and disk space. According to co-inventor Joe Armstrong, the language went from lab product to real applications following the collapse of the next-generation AXE telephone exchange named AXE-N in 1995. As a result, Erlang was chosen for the next Asynchronous Transfer Mode (ATM) exchange AXD.", "title": "History" }, { "paragraph_id": 6, "text": "In February 1998, Ericsson Radio Systems banned the in-house use of Erlang for new products, citing a preference for non-proprietary languages. The ban caused Armstrong and others to make plans to leave Ericsson. In March 1998 Ericsson announced the AXD301 switch, containing over a million lines of Erlang and reported to achieve a high availability of nine \"9\"s. In December 1998, the implementation of Erlang was open-sourced and most of the Erlang team resigned to form a new company Bluetail AB. Ericsson eventually relaxed the ban and re-hired Armstrong in 2004.", "title": "History" }, { "paragraph_id": 7, "text": "In 2006, native symmetric multiprocessing support was added to the runtime system and VM.", "title": "History" }, { "paragraph_id": 8, "text": "Erlang applications are built of very lightweight Erlang processes in the Erlang runtime system. 
Erlang processes can be seen as \"living\" objects (object-oriented programming), with data encapsulation and message passing, but capable of changing behavior during runtime. The Erlang runtime system provides strict process isolation between Erlang processes (this includes data and garbage collection, separated individually by each Erlang process) and transparent communication between processes (see Location transparency) on different Erlang nodes (on different hosts).", "title": "History" }, { "paragraph_id": 9, "text": "Joe Armstrong, co-inventor of Erlang, summarized the principles of processes in his PhD thesis:", "title": "History" }, { "paragraph_id": 10, "text": "Joe Armstrong remarked in an interview with Rackspace in 2013: \"If Java is 'write once, run anywhere', then Erlang is 'write once, run forever'.\"", "title": "History" }, { "paragraph_id": 11, "text": "In 2014, Ericsson reported Erlang was being used in its support nodes, and in GPRS, 3G and LTE mobile networks worldwide and also by Nortel and T-Mobile.", "title": "History" }, { "paragraph_id": 12, "text": "Erlang is used in RabbitMQ. As Tim Bray, director of Web Technologies at Sun Microsystems, expressed in his keynote at O'Reilly Open Source Convention (OSCON) in July 2008:", "title": "History" }, { "paragraph_id": 13, "text": "If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in.", "title": "History" }, { "paragraph_id": 14, "text": "Erlang is the programming language used to code WhatsApp.", "title": "History" }, { "paragraph_id": 15, "text": "Elixir is a programming language that compiles into BEAM byte code (via Erlang Abstract Format).", "title": "History" }, { "paragraph_id": 16, "text": "Since being released as open source, Erlang has been spreading beyond telecoms, establishing itself in other vertical markets such as FinTech, gaming, healthcare, automotive, internet of things and blockchain. Apart from WhatsApp, there are other companies listed as Erlang's success stories: Vocalink (a MasterCard company), Goldman Sachs, Nintendo, AdRoll, Grindr, BT Mobile, Samsung, OpenX, and SITA.", "title": "History" }, { "paragraph_id": 17, "text": "A factorial algorithm implemented in Erlang:", "title": "Functional programming examples" }, { "paragraph_id": 18, "text": "A tail recursive algorithm that produces the Fibonacci sequence:", "title": "Functional programming examples" }, { "paragraph_id": 19, "text": "Here's the same program without the explanatory comments:", "title": "Functional programming examples" }, { "paragraph_id": 20, "text": "Quicksort in Erlang, using list comprehension:", "title": "Functional programming examples" }, { "paragraph_id": 21, "text": "The above example recursively invokes the function qsort until nothing remains to be sorted. 
The expression [Front || Front <- Rest, Front < Pivot] is a list comprehension, meaning \"Construct a list of elements Front such that Front is a member of Rest, and Front is less than Pivot.\" ++ is the list concatenation operator.", "title": "Functional programming examples" }, { "paragraph_id": 22, "text": "A comparison function can be used for more complicated structures for the sake of readability.", "title": "Functional programming examples" }, { "paragraph_id": 23, "text": "The following code would sort lists according to length:", "title": "Functional programming examples" }, { "paragraph_id": 24, "text": "A Pivot is taken from the first parameter given to qsort() and the rest of Lists is named Rest. Note that the expression", "title": "Functional programming examples" }, { "paragraph_id": 25, "text": "is no different in form from", "title": "Functional programming examples" }, { "paragraph_id": 26, "text": "(in the previous example) except for the use of a comparison function in the last part, saying \"Construct a list of elements X such that X is a member of Rest, and Smaller is true\", with Smaller being defined earlier as", "title": "Functional programming examples" }, { "paragraph_id": 27, "text": "The anonymous function is named Smaller in the parameter list of the second definition of qsort so that it can be referenced by that name within that function. It is not named in the first definition of qsort, which deals with the base case of an empty list and thus has no need of this function, let alone a name for it.", "title": "Functional programming examples" }, { "paragraph_id": 28, "text": "Erlang has eight primitive data types:", "title": "Data types" }, { "paragraph_id": 29, "text": "And three compound data types:", "title": "Data types" }, { "paragraph_id": 30, "text": "Two forms of syntactic sugar are provided:", "title": "Data types" }, { "paragraph_id": 31, "text": "Erlang has no method to define classes, although there are external libraries available.", "title": "Data types" }, { "paragraph_id": 32, "text": "Erlang is designed with a mechanism that makes it easy for external processes to monitor for crashes (or hardware failures), rather than an in-process mechanism like exception handling used in many other programming languages. Crashes are reported like other messages, which is the only way processes can communicate with each other, and subprocesses can be spawned cheaply (see below). The \"let it crash\" philosophy prefers that a process be completely restarted rather than trying to recover from a serious failure. Though it still requires handling of errors, this philosophy results in less code devoted to defensive programming where error-handling code is highly contextual and specific.", "title": "\"Let it crash\" coding style" }, { "paragraph_id": 33, "text": "A typical Erlang application is written in the form of a supervisor tree. This architecture is based on a hierarchy of processes in which the top level process is known as a \"supervisor\". The supervisor then spawns multiple child processes that act either as workers or more, lower level supervisors. 
Such hierarchies can exist to arbitrary depths and have proven to provide a highly scalable and fault-tolerant environment within which application functionality can be implemented.", "title": "\"Let it crash\" coding style" }, { "paragraph_id": 34, "text": "Within a supervisor tree, all supervisor processes are responsible for managing the lifecycle of their child processes, and this includes handling situations in which those child processes crash. Any process can become a supervisor by first spawning a child process, then calling erlang:monitor/2 on that process. If the monitored process then crashes, the supervisor will receive a message containing a tuple whose first member is the atom 'DOWN'. The supervisor is responsible firstly for listening for such messages and secondly, for taking the appropriate action to correct the error condition.", "title": "\"Let it crash\" coding style" }, { "paragraph_id": 35, "text": "Erlang's main strength is support for concurrency. It has a small but powerful set of primitives to create processes and communicate among them. Erlang is conceptually similar to the language occam, though it recasts the ideas of communicating sequential processes (CSP) in a functional framework and uses asynchronous message passing. Processes are the primary means to structure an Erlang application. They are neither operating system processes nor threads, but lightweight processes that are scheduled by BEAM. Like operating system processes (but unlike operating system threads), they share no state with each other. The estimated minimal overhead for each is 300 words. Thus, many processes can be created without degrading performance. In 2005, a benchmark with 20 million processes was successfully performed with 64-bit Erlang on a machine with 16 GB random-access memory (RAM; total 800 bytes/process). Erlang has supported symmetric multiprocessing since release R11B of May 2006.", "title": "Concurrency and distribution orientation" }, { "paragraph_id": 36, "text": "While threads need external library support in most languages, Erlang provides language-level features to create and manage processes with the goal of simplifying concurrent programming. Though all concurrency is explicit in Erlang, processes communicate using message passing instead of shared variables, which removes the need for explicit locks (a locking scheme is still used internally by the VM).", "title": "Concurrency and distribution orientation" }, { "paragraph_id": 37, "text": "Inter-process communication works via a shared-nothing asynchronous message passing system: every process has a \"mailbox\", a queue of messages that have been sent by other processes and not yet consumed. A process uses the receive primitive to retrieve messages that match desired patterns. A message-handling routine tests messages in turn against each pattern, until one of them matches. When the message is consumed and removed from the mailbox the process resumes execution. 
A message may comprise any Erlang structure, including primitives (integers, floats, characters, atoms), tuples, lists, and functions.", "title": "Concurrency and distribution orientation" }, { "paragraph_id": 38, "text": "The code example below shows the built-in support for distributed processes:", "title": "Concurrency and distribution orientation" }, { "paragraph_id": 39, "text": "As the example shows, processes may be created on remote nodes, and communication with them is transparent in the sense that communication with remote processes works exactly as communication with local processes.", "title": "Concurrency and distribution orientation" }, { "paragraph_id": 40, "text": "Concurrency supports the primary method of error-handling in Erlang. When a process crashes, it neatly exits and sends a message to the controlling process which can then take action, such as starting a new process that takes over the old process's task.", "title": "Concurrency and distribution orientation" }, { "paragraph_id": 41, "text": "The official reference implementation of Erlang uses BEAM. BEAM is included in the official distribution of Erlang, called Erlang/OTP. BEAM executes bytecode which is converted to threaded code at load time. It also includes a native code compiler on most platforms, developed by the High Performance Erlang Project (HiPE) at Uppsala University. Since October 2001 the HiPE system is fully integrated in Ericsson's Open Source Erlang/OTP system. It also supports interpreting, directly from source code via abstract syntax tree, via script as of R11B-5 release of Erlang.", "title": "Implementation" }, { "paragraph_id": 42, "text": "Erlang supports language-level Dynamic Software Updating. To implement this, code is loaded and managed as \"module\" units; the module is a compilation unit. The system can keep two versions of a module in memory at the same time, and processes can concurrently run code from each. The versions are referred to as the \"new\" and the \"old\" version. A process will not move into the new version until it makes an external call to its module.", "title": "Hot code loading and modules" }, { "paragraph_id": 43, "text": "An example of the mechanism of hot code loading:", "title": "Hot code loading and modules" }, { "paragraph_id": 44, "text": "For the second version, we add the possibility to reset the count to zero.", "title": "Hot code loading and modules" }, { "paragraph_id": 45, "text": "Only when receiving a message consisting of the atom code_switch will the loop execute an external call to codeswitch/1 (?MODULE is a preprocessor macro for the current module). If there is a new version of the counter module in memory, then its codeswitch/1 function will be called. The practice of having a specific entry-point into a new version allows the programmer to transform state to what is needed in the newer version. In the example, the state is kept as an integer.", "title": "Hot code loading and modules" }, { "paragraph_id": 46, "text": "In practice, systems are built up using design principles from the Open Telecom Platform, which leads to more code upgradable designs. Successful hot code loading is exacting. Code must be written with care to make use of Erlang's facilities.", "title": "Hot code loading and modules" }, { "paragraph_id": 47, "text": "In 1998, Ericsson released Erlang as free and open-source software to ensure its independence from a single vendor and to increase awareness of the language. 
Erlang, together with libraries and the real-time distributed database Mnesia, forms the OTP collection of libraries. Ericsson and a few other companies support Erlang commercially.", "title": "Distribution" }, { "paragraph_id": 48, "text": "Since the open source release, Erlang has been used by several firms worldwide, including Nortel and T-Mobile. Although Erlang was designed to fill a niche and has remained an obscure language for most of its existence, its popularity is growing due to demand for concurrent services. Erlang has found some use in fielding massively multiplayer online role-playing game (MMORPG) servers.", "title": "Distribution" } ]
Erlang is a general-purpose, concurrent, functional high-level programming language, and a garbage-collected runtime system. The term Erlang is used interchangeably with Erlang/OTP, or Open Telecom Platform (OTP), which consists of the Erlang runtime system, several ready-to-use components (OTP) mainly written in Erlang, and a set of design principles for Erlang programs. The Erlang runtime system is designed for systems with these traits: distributed; fault-tolerant; soft real-time; highly available, non-stop applications; and hot swapping, where code can be changed without stopping a system. The Erlang programming language has immutable data, pattern matching, and functional programming. The sequential subset of the Erlang language supports eager evaluation, single assignment, and dynamic typing. A normal Erlang application is built out of hundreds of small Erlang processes. It was originally proprietary software within Ericsson, developed by Joe Armstrong, Robert Virding, and Mike Williams in 1986, but was released as free and open-source software in 1998. Erlang/OTP is supported and maintained by the Open Telecom Platform (OTP) product unit at Ericsson.
2001-08-04T23:09:15Z
2023-11-29T10:15:59Z
[ "Template:Snd", "Template:Reflist", "Template:Cite AV media", "Template:Cite book", "Template:Refend", "Template:Short description", "Template:Infobox programming language", "Template:Cbignore", "Template:Programming languages", "Template:Authority control", "Template:Blockquote", "Template:Cite journal", "Template:Webarchive", "Template:Wikibooks", "Template:Official website", "Template:Use dmy dates", "Template:Cite web", "Template:Cite conference", "Template:Cite thesis", "Template:Refbegin", "Template:Commons category", "Template:IPAc-en", "Template:Respell" ]
https://en.wikipedia.org/wiki/Erlang_(programming_language)
9,647
Euphoria (programming language)
Euphoria is a programming language created by Robert Craig of Rapid Deployment Software in Toronto, Ontario, Canada. Initially developed (though not publicly released) on the Atari ST, the first commercial release was for MS-DOS as proprietary software. In 2006, with the release of version 3, Euphoria became open-source software. The openEuphoria Group continues to administer and develop the project. In December 2010, the openEuphoria Group released version 4 of openEuphoria along with a new identity and mascot for the project. OpenEuphoria is currently available for Windows, Linux, macOS and three flavors of *BSD. Euphoria is a general-purpose high-level imperative-procedural interpreted language. A translator generates C source code and the GNU compiler collection (GCC) and Open Watcom compilers are supported. Alternatively, Euphoria programs may be bound with the interpreter to create stand-alone executables. A number of graphical user interface (GUI) libraries are supported including Win32lib and wrappers for wxWidgets, GTK+ and IUP. Euphoria has a simple built-in database and wrappers for a variety of other databases. The Euphoria language is a general purpose procedural language that focuses on simplicity, legibility, rapid development and performance via several means. Developed as a personal project to invent a programming language from scratch, Euphoria was created by Robert Craig on an Atari Mega-ST. Many design ideas for the language came from Craig's Master's thesis in computer science at the University of Toronto. Craig's thesis was heavily influenced by the work of John Backus on functional programming (FP) languages. Craig ported his original Atari implementation to the 16-bit DOS platform and Euphoria was first released, version 1.0, in July 1993 under a proprietary licence. The original Atari implementation is described by Craig as "primitive" and has not been publicly released. Euphoria continued to be developed and released by Craig via his company Rapid Deployment Software (RDS) and website rapideuphoria.com. In October 2006 RDS released version 3 of Euphoria and announced that henceforth Euphoria would be freely distributed under an open-source software licence. RDS continued to develop Euphoria, culminating with the release of version 3.1.1 in August, 2007. Subsequently, RDS ceased unilateral development of Euphoria and the openEuphoria Group took over ongoing development. The openEuphoria Group released version 4 in December, 2010 along with a new logo and mascot for the openEuphoria project. Version 3.1.1 remains an important milestone release, being the last version of Euphoria which supports the DOS platform. Euphoria is an acronym for End-User Programming with Hierarchical Objects for Robust Interpreted Applications although there is some suspicion that this is a backronym. The Euphoria interpreter was originally written in C. With the release of version 2.5 in November 2004 the Euphoria interpreter was split into two parts: a front-end parser, and a back-end interpreter. The front-end is now written in Euphoria (and used with the Euphoria-to-C translator and the Binder). The main back-end and run time library are written in C. Euphoria was conceived and developed with the following design goals and features: Euphoria is designed to readily facilitate handling of dynamic sets of data of varying types and is particularly useful for string and image processing. 
Euphoria has been used in artificial intelligence experiments, the study of mathematics, for teaching programming, and to implement fonts involving thousands of characters. A large part of the Euphoria interpreter is written in Euphoria. Euphoria has two basic data types: Euphoria has two additional data types predefined: There is no character string data type. Strings are represented by a sequence of integer values. However, because literal strings are so commonly used in programming, Euphoria interprets double-quote enclosed characters as a sequence of integers. Thus is seen as if the coder had written: which is the same as: Program comments start with a double hyphen -- and go through the end of line. The following code looks for an old item in a group of items. If found, it removes it by concatenating all the elements before it with all the elements after it. Note that the first element in a sequence has the index one [1] and that $ refers to the length (i.e., total number of elements) of the sequence. The following modification to the above example replaces an old item with a new item. As the variables old and new have been defined as objects, they could be atoms or sequences. Type checking is not needed as the function will work with any sequence of data of any type and needs no external libraries. Furthermore, no pointers are involved and subscripts are automatically checked. Thus the function cannot access memory out-of-bounds. There is no need to allocate or deallocate memory explicitly and no chance of a memory leak. The line shows some of the sequence handling facilities. A sequence may contain a set of any types, and this can be sliced (to take a subset of the data in a sequence) and concatenated in expressions with no need for special functions. Arguments to routines are always passed by value; there is no pass-by-reference facility. However, parameters are allowed to be modified locally (i.e., within the callee) which is implemented very efficiently as sequences have automatic copy-on-write semantics. In other words, when you pass a sequence to a routine, initially only a reference to it is passed, but at the point the routine modifies this sequence parameter the sequence is copied and the routine updates only a copy of the original. Free downloads of Euphoria for the various platforms, packages, Windows IDE, Windows API libraries, a cross-platform GTK3 wrapper for Linux and Windows, graphics libraries (DOS, OpenGL, etc.).
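A minimal Euphoria sketch of the "remove an old item" routine described in the examples section above (the function and variable names are assumptions), together with the string-as-sequence equivalence:

-- "abc" is seen as if the coder had written {'a', 'b', 'c'},
-- which is the same as {97, 98, 99}.

function remove_item(object old, sequence group)
    integer pos
    pos = find(old, group)            -- built-in search; returns 0 if absent
    if pos > 0 then
        -- everything before the item & everything after it
        group = group[1 .. pos - 1] & group[pos + 1 .. $]
    end if
    return group
end function

The slice group[1 .. pos - 1] is empty when pos is 1, so the same concatenation also handles removal of the first or last element.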
[ { "paragraph_id": 0, "text": "Euphoria is a programming language created by Robert Craig of Rapid Deployment Software in Toronto, Ontario, Canada. Initially developed (though not publicly released) on the Atari ST, the first commercial release was for MS-DOS as proprietary software. In 2006, with the release of version 3, Euphoria became open-source software. The openEuphoria Group continues to administer and develop the project. In December 2010, the openEuphoria Group released version 4 of openEuphoria along with a new identity and mascot for the project. OpenEuphoria is currently available for Windows, Linux, macOS and three flavors of *BSD.", "title": "" }, { "paragraph_id": 1, "text": "Euphoria is a general-purpose high-level imperative-procedural interpreted language. A translator generates C source code and the GNU compiler collection (GCC) and Open Watcom compilers are supported. Alternatively, Euphoria programs may be bound with the interpreter to create stand-alone executables. A number of graphical user interface (GUI) libraries are supported including Win32lib and wrappers for wxWidgets, GTK+ and IUP. Euphoria has a simple built-in database and wrappers for a variety of other databases.", "title": "" }, { "paragraph_id": 2, "text": "The Euphoria language is a general purpose procedural language that focuses on simplicity, legibility, rapid development and performance via several means.", "title": "Overview" }, { "paragraph_id": 3, "text": "Developed as a personal project to invent a programming language from scratch, Euphoria was created by Robert Craig on an Atari Mega-ST. Many design ideas for the language came from Craig's Master's thesis in computer science at the University of Toronto. Craig's thesis was heavily influenced by the work of John Backus on functional programming (FP) languages.", "title": "History" }, { "paragraph_id": 4, "text": "Craig ported his original Atari implementation to the 16-bit DOS platform and Euphoria was first released, version 1.0, in July 1993 under a proprietary licence. The original Atari implementation is described by Craig as \"primitive\" and has not been publicly released. Euphoria continued to be developed and released by Craig via his company Rapid Deployment Software (RDS) and website rapideuphoria.com. In October 2006 RDS released version 3 of Euphoria and announced that henceforth Euphoria would be freely distributed under an open-source software licence.", "title": "History" }, { "paragraph_id": 5, "text": "RDS continued to develop Euphoria, culminating with the release of version 3.1.1 in August, 2007. Subsequently, RDS ceased unilateral development of Euphoria and the openEuphoria Group took over ongoing development. The openEuphoria Group released version 4 in December, 2010 along with a new logo and mascot for the openEuphoria project.", "title": "History" }, { "paragraph_id": 6, "text": "Version 3.1.1 remains an important milestone release, being the last version of Euphoria which supports the DOS platform.", "title": "History" }, { "paragraph_id": 7, "text": "Euphoria is an acronym for End-User Programming with Hierarchical Objects for Robust Interpreted Applications although there is some suspicion that this is a backronym.", "title": "History" }, { "paragraph_id": 8, "text": "The Euphoria interpreter was originally written in C. With the release of version 2.5 in November 2004 the Euphoria interpreter was split into two parts: a front-end parser, and a back-end interpreter. 
The front-end is now written in Euphoria (and used with the Euphoria-to-C translator and the Binder). The main back-end and run time library are written in C.", "title": "History" }, { "paragraph_id": 9, "text": "Euphoria was conceived and developed with the following design goals and features:", "title": "Features" }, { "paragraph_id": 10, "text": "Euphoria is designed to readily facilitate handling of dynamic sets of data of varying types and is particularly useful for string and image processing. Euphoria has been used in artificial intelligence experiments, the study of mathematics, for teaching programming, and to implement fonts involving thousands of characters. A large part of the Euphoria interpreter is written in Euphoria.", "title": "Use" }, { "paragraph_id": 11, "text": "Euphoria has two basic data types:", "title": "Data types" }, { "paragraph_id": 12, "text": "Euphoria has two additional data types predefined:", "title": "Data types" }, { "paragraph_id": 13, "text": "There is no character string data type. Strings are represented by a sequence of integer values. However, because literal strings are so commonly used in programming, Euphoria interprets double-quote enclosed characters as a sequence of integers. Thus", "title": "Data types" }, { "paragraph_id": 14, "text": "is seen as if the coder had written:", "title": "Data types" }, { "paragraph_id": 15, "text": "which is the same as:", "title": "Data types" }, { "paragraph_id": 16, "text": "Program comments start with a double hyphen -- and go through the end of line.", "title": "Examples" }, { "paragraph_id": 17, "text": "The following code looks for an old item in a group of items. If found, it removes it by concatenating all the elements before it with all the elements after it. Note that the first element in a sequence has the index one [1] and that $ refers to the length (i.e., total number of elements) of the sequence.", "title": "Examples" }, { "paragraph_id": 18, "text": "The following modification to the above example replaces an old item with a new item. As the variables old and new have been defined as objects, they could be atoms or sequences. Type checking is not needed as the function will work with any sequence of data of any type and needs no external libraries.", "title": "Examples" }, { "paragraph_id": 19, "text": "Furthermore, no pointers are involved and subscripts are automatically checked. Thus the function cannot access memory out-of-bounds. There is no need to allocate or deallocate memory explicitly and no chance of a memory leak.", "title": "Examples" }, { "paragraph_id": 20, "text": "The line", "title": "Examples" }, { "paragraph_id": 21, "text": "shows some of the sequence handling facilities. A sequence may contain a set of any types, and this can be sliced (to take a subset of the data in a sequence) and concatenated in expressions with no need for special functions.", "title": "Examples" }, { "paragraph_id": 22, "text": "Arguments to routines are always passed by value; there is no pass-by-reference facility. However, parameters are allowed to be modified locally (i.e., within the callee) which is implemented very efficiently as sequences have automatic copy-on-write semantics. 
In other words, when you pass a sequence to a routine, initially only a reference to it is passed, but at the point the routine modifies this sequence parameter the sequence is copied and the routine updates only a copy of the original.", "title": "Parameter passing" }, { "paragraph_id": 23, "text": "Free downloads of Euphoria for the various platforms, packages, Windows IDE, Windows API libraries, a cross-platform GTK3 wrapper for Linux and Windows, graphics libraries (DOS, OpenGL, etc.).", "title": "External links" } ]
Euphoria is a programming language created by Robert Craig of Rapid Deployment Software in Toronto, Ontario, Canada. Initially developed on the Atari ST, the first commercial release was for MS-DOS as proprietary software. In 2006, with the release of version 3, Euphoria became open-source software. The openEuphoria Group continues to administer and develop the project. In December 2010, the openEuphoria Group released version 4 of openEuphoria along with a new identity and mascot for the project. OpenEuphoria is currently available for Windows, Linux, macOS and three flavors of *BSD. Euphoria is a general-purpose high-level imperative-procedural interpreted language. A translator generates C source code and the GNU compiler collection (GCC) and Open Watcom compilers are supported. Alternatively, Euphoria programs may be bound with the interpreter to create stand-alone executables. A number of graphical user interface (GUI) libraries are supported including Win32lib and wrappers for wxWidgets, GTK+ and IUP. Euphoria has a simple built-in database and wrappers for a variety of other databases.
2022-12-12T02:39:11Z
[ "Template:Infobox programming language", "Template:Val", "Template:Reflist", "Template:Official website", "Template:BASIC", "Template:According to whom", "Template:Citation needed", "Template:Tmath", "Template:Commons category" ]
https://en.wikipedia.org/wiki/Euphoria_(programming_language)
9,649
Energy
In physics, energy (from Ancient Greek ἐνέργεια (enérgeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J). Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, and the internal energy contained within a thermodynamic system. All living organisms constantly take in and release energy. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The Earth's climate and ecosystems processes are driven by the energy the planet receives from the Sun (although a small amount is also contributed by geothermal energy). The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples. The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation', which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. 
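The factor-of-two remark can be made explicit in standard notation (supplied here, not quoted from the article): Leibniz's vis viva is m v^{2}, while the modern kinetic energy is E_{k} = \tfrac{1}{2} m v^{2}.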
Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy". In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. In 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units (SI), the unit of energy is the joule, named after Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However energy is also expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce. In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept. Work, a function of energy, is force times distance. This says that the work ( W {\displaystyle W} ) is equal to the line integral of the force F along a path C; for details see the mechanical work article. 
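In standard notation (a conventional formulation, not recovered from the article), the statement just given reads W = \int_{C} \mathbf{F} \cdot \mathrm{d}\mathbf{s}, the line integral of the force along the path C.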
Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann's population factor e; that is, the probability of a molecule to have energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy. In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. 
In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy. Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action. All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria and some of the energy is used to convert ADP into ATP: The rest of the chemical energy of the carbohydrate or fat are converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work: It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. 
The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat. In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy. Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement. In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms). In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. 
In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is the Planck constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: E₀ = mc², where m is the rest mass of the body and c is the speed of light in vacuum.

For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.

In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.

Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.

In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).

Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers.
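Both relations introduced above, the Planck relation E = hν and the rest energy E₀ = mc², lend themselves to quick numerical checks. The particular inputs below (a visible-light frequency and the electron mass) are standard illustrative values and are not taken from the text.

```python
# Numerical illustrations of the Planck relation E = h*nu and of rest energy
# E0 = m*c**2. The chosen frequency and the electron mass are illustrative only.

PLANCK_CONSTANT = 6.626e-34   # J*s
SPEED_OF_LIGHT = 2.998e8      # m/s
ELECTRON_MASS = 9.109e-31     # kg

# Photon energy of green light with frequency ~5.5e14 Hz: about 3.6e-19 J.
frequency_hz = 5.5e14
photon_energy_j = PLANCK_CONSTANT * frequency_hz
print(f"Photon energy: {photon_energy_j:.2e} J")

# Rest energy of an electron: about 8.2e-14 J (roughly 0.511 MeV).
electron_rest_energy_j = ELECTRON_MASS * SPEED_OF_LIGHT ** 2
print(f"Electron rest energy: {electron_rest_energy_j:.2e} J")
```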
Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).

Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.

There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.

Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.

Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.

Energy is also transferred from potential energy (E_p) to kinetic energy (E_k) and then back to potential energy constantly. This is referred to as conservation of energy.
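The Carnot limit mentioned a few paragraphs above can be put in numbers. The standard statement of that bound, η_max = 1 − T_cold/T_hot with temperatures in kelvin, is not spelled out in the text, and the reservoir temperatures used here are purely illustrative.

```python
# Sketch of the Carnot bound on converting heat to work in a cyclic process.
# eta_max = 1 - T_cold / T_hot is the textbook form of Carnot's theorem; the
# temperatures below are arbitrary illustrative values.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible to work between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

# A heat engine running between ~800 K and ~300 K can convert at most
# about 62% of the heat it takes in into work.
print(f"Carnot limit (800 K hot, 300 K cold): {carnot_efficiency(800, 300):.0%}")
```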
In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: The equation can then be simplified further since E_p = mgh (mass times acceleration due to gravity times the height) and E_k = ½mv² (half mass times velocity squared). Then the total amount of energy can be found by adding E_p + E_k = E_total.

Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc², derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).

Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c² is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10¹⁶ joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.

Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy.
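Two of the figures above can be reproduced directly: the frictionless exchange between E_p = mgh and E_k = ½mv², and the roughly 9×10¹⁶ J (about 21 megatons of TNT) of rest energy in 1 kg of mass. The pendulum mass and release height, and the TNT conversion factor of 4.184×10¹⁵ J per megaton, are standard illustrative values not taken from the text.

```python
import math

# Worked numbers for the two examples above: a lossless pendulum exchanging
# E_p = m*g*h and E_k = 0.5*m*v**2, and the rest energy of 1 kg of mass.
# Pendulum parameters and the TNT conversion factor are illustrative values.

g = 9.81           # m/s^2
mass_kg = 1.0
height_m = 0.20    # release height of the pendulum bob

# With no losses, the potential energy at the top equals the kinetic energy at
# the bottom: m*g*h = 0.5*m*v**2, so v = sqrt(2*g*h).
potential_energy_j = mass_kg * g * height_m
speed_at_bottom = math.sqrt(2 * g * height_m)
print(f"E_p at the top: {potential_energy_j:.2f} J, speed at the bottom: {speed_at_bottom:.2f} m/s")

# Rest energy of 1 kg, as quoted above: about 9e16 J, roughly 21 megatons of TNT.
SPEED_OF_LIGHT = 2.998e8        # m/s
J_PER_MEGATON_TNT = 4.184e15
rest_energy_j = 1.0 * SPEED_OF_LIGHT ** 2
print(f"Rest energy of 1 kg: {rest_energy_j:.2e} J ≈ "
      f"{rest_energy_j / J_PER_MEGATON_TNT:.0f} Mt TNT")
```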
In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).

As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.

The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.

While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.

Richard Feynman said during a 1961 lecture:

There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.

Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. This law is a fundamental principle of physics.
As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.

Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.

In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scales, the uncertainty in the energy is given by ΔE Δt ≥ ħ/2, which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).

In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.

Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.

Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law: E = W + Q, where E is the amount of energy transferred, W represents the work done on or by the system, and Q represents the heat flow into or out of the system. As a simplification, the heat term, Q, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes, E = W. This simplified equation is the one used to define the joule, for example.
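The closed-system balance just described can be written as a small piece of bookkeeping. The sign convention used here (work and heat counted as positive when they flow into the system) is an assumption made for this sketch, not something the text specifies.

```python
# Bookkeeping sketch of the closed-system first law described above: the energy
# transferred equals the work done on the system plus the heat flowing into it,
# and for an adiabatic process the heat term is taken to be zero.
# Sign convention (positive = into the system) is an assumption of this sketch.

def energy_transferred(work_on_system_j: float, heat_into_system_j: float = 0.0) -> float:
    """Closed-system energy balance: E = W + Q."""
    return work_on_system_j + heat_into_system_j

# General case: 150 J of work done on a gas while 40 J of heat leaks out of it.
print(energy_transferred(150.0, -40.0))   # 110.0 J gained by the system

# Adiabatic case (Q = 0): the transferred energy is accounted for entirely as
# work, the simplified relation the text says is used to define the joule.
print(energy_transferred(150.0))          # 150.0 J
```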
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E_matter, one may write E = W + Q + E_matter.

Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.

The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as dU = T dS − P dV, where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).

This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by dU = δQ + δW, where δQ is the heat supplied to the system and δW is the work applied to the system.

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.

This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics.
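The equipartition statement for the mass-on-a-spring oscillator can be checked numerically: averaged over a cycle, kinetic and potential energy each come out to half of the constant total. The mass, spring constant and amplitude below are arbitrary illustrative values.

```python
import math

# Numerical check of equipartition for a mass on a spring: over one full cycle
# the time-averaged kinetic and potential energies are each half of the total.
# The parameters are arbitrary illustrative values.

m = 0.5        # kg, oscillating mass
k = 200.0      # N/m, spring constant
A = 0.03       # m, oscillation amplitude
omega = math.sqrt(k / m)
period = 2 * math.pi / omega

samples = 10_000
kinetic_avg = 0.0
potential_avg = 0.0
for i in range(samples):
    t = period * i / samples
    x = A * math.cos(omega * t)             # displacement
    v = -A * omega * math.sin(omega * t)    # velocity
    kinetic_avg += 0.5 * m * v * v / samples
    potential_avg += 0.5 * k * x * x / samples

total_energy = 0.5 * k * A * A              # constant total energy of the oscillator
print(f"<KE> = {kinetic_avg:.4f} J, <PE> = {potential_avg:.4f} J, total = {total_energy:.4f} J")
# Both averages come out to about half the total, as equipartition predicts.
```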
The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
[ { "paragraph_id": 0, "text": "In physics, energy (from Ancient Greek ἐνέργεια (enérgeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J).", "title": "" }, { "paragraph_id": 1, "text": "Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, and the internal energy contained within a thermodynamic system. All living organisms constantly take in and release energy.", "title": "" }, { "paragraph_id": 2, "text": "Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy.", "title": "" }, { "paragraph_id": 3, "text": "Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The Earth's climate and ecosystems processes are driven by the energy the planet receives from the Sun (although a small amount is also contributed by geothermal energy).", "title": "" }, { "paragraph_id": 4, "text": "The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself.", "title": "Forms" }, { "paragraph_id": 5, "text": "While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.", "title": "Forms" }, { "paragraph_id": 6, "text": "The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation', which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.", "title": "History" }, { "paragraph_id": 7, "text": "In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. 
To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called \"energy\".", "title": "History" }, { "paragraph_id": 8, "text": "In 1807, Thomas Young was possibly the first to use the term \"energy\" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described \"kinetic energy\" in 1829 in its modern sense, and in 1853, William Rankine coined the term \"potential energy\". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.", "title": "History" }, { "paragraph_id": 9, "text": "These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.", "title": "History" }, { "paragraph_id": 10, "text": "In 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. The most famous of them used the \"Joule apparatus\": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.", "title": "Units of measure" }, { "paragraph_id": 11, "text": "In the International System of Units (SI), the unit of energy is the joule, named after Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However energy is also expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.", "title": "Units of measure" }, { "paragraph_id": 12, "text": "The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. 
Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.", "title": "Units of measure" }, { "paragraph_id": 13, "text": "In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.", "title": "Scientific use" }, { "paragraph_id": 14, "text": "Work, a function of energy, is force times distance.", "title": "Scientific use" }, { "paragraph_id": 15, "text": "This says that the work ( W {\\displaystyle W} ) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.", "title": "Scientific use" }, { "paragraph_id": 16, "text": "The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.", "title": "Scientific use" }, { "paragraph_id": 17, "text": "Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).", "title": "Scientific use" }, { "paragraph_id": 18, "text": "Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.", "title": "Scientific use" }, { "paragraph_id": 19, "text": "In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. 
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann's population factor e; that is, the probability of a molecule to have energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.", "title": "Scientific use" }, { "paragraph_id": 20, "text": "In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a \"feel\" for the use of a given amount of energy.", "title": "Scientific use" }, { "paragraph_id": 21, "text": "Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.", "title": "Scientific use" }, { "paragraph_id": 22, "text": "All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. 
The food molecules are oxidized to carbon dioxide and water in the mitochondria", "title": "Scientific use" }, { "paragraph_id": 23, "text": "and some of the energy is used to convert ADP into ATP:", "title": "Scientific use" }, { "paragraph_id": 24, "text": "The rest of the chemical energy of the carbohydrate or fat are converted into heat: the ATP is used as a sort of \"energy currency\", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:", "title": "Scientific use" }, { "paragraph_id": 25, "text": "It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe (\"the surroundings\"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.", "title": "Scientific use" }, { "paragraph_id": 26, "text": "In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.", "title": "Scientific use" }, { "paragraph_id": 27, "text": "Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.", "title": "Scientific use" }, { "paragraph_id": 28, "text": "In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. 
This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).", "title": "Scientific use" }, { "paragraph_id": 29, "text": "In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.", "title": "Scientific use" }, { "paragraph_id": 30, "text": "", "title": "Scientific use" }, { "paragraph_id": 31, "text": "In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = h ν {\\displaystyle E=h\\nu } (where h {\\displaystyle h} is the Planck constant and ν {\\displaystyle \\nu } the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.", "title": "Scientific use" }, { "paragraph_id": 32, "text": "When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. 
He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:", "title": "Scientific use" }, { "paragraph_id": 33, "text": "where", "title": "Scientific use" }, { "paragraph_id": 34, "text": "For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.", "title": "Scientific use" }, { "paragraph_id": 35, "text": "In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.", "title": "Scientific use" }, { "paragraph_id": 36, "text": "Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system (\"mass manifestations\"), and is also responsible for the potential ability of the system to perform work or heating (\"energy manifestations\"), subject to the limitations of other physical laws.", "title": "Scientific use" }, { "paragraph_id": 37, "text": "In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).", "title": "Scientific use" }, { "paragraph_id": 38, "text": "Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).", "title": "Transformation" }, { "paragraph_id": 39, "text": "Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. 
The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.", "title": "Transformation" }, { "paragraph_id": 40, "text": "There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.", "title": "Transformation" }, { "paragraph_id": 41, "text": "Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being \"released\" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to \"store\" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.", "title": "Transformation" }, { "paragraph_id": 42, "text": "Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.", "title": "Transformation" }, { "paragraph_id": 43, "text": "Energy is also transferred from potential energy ( E p {\\displaystyle E_{p}} ) to kinetic energy ( E k {\\displaystyle E_{k}} ) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:", "title": "Transformation" }, { "paragraph_id": 44, "text": "The equation can then be simplified further since E p = m g h {\\displaystyle E_{p}=mgh} (mass times acceleration due to gravity times the height) and E k = 1 2 m v 2 {\\textstyle E_{k}={\\frac {1}{2}}mv^{2}} (half mass times velocity squared). 
Then the total amount of energy can be found by adding E p + E k = E total {\\displaystyle E_{p}+E_{k}=E_{\\text{total}}} .", "title": "Transformation" }, { "paragraph_id": 45, "text": "Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).", "title": "Transformation" }, { "paragraph_id": 46, "text": "Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c 2 {\\displaystyle c^{2}} is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~ 9 × 10 16 {\\displaystyle 9\\times 10^{16}} joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.", "title": "Transformation" }, { "paragraph_id": 47, "text": "Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. 
In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).", "title": "Transformation" }, { "paragraph_id": 48, "text": "As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.", "title": "Transformation" }, { "paragraph_id": 49, "text": "The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.", "title": "Conservation of energy" }, { "paragraph_id": 50, "text": "While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.", "title": "Conservation of energy" }, { "paragraph_id": 51, "text": "Richard Feynman said during a 1961 lecture:", "title": "Conservation of energy" }, { "paragraph_id": 52, "text": "There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.", "title": "Conservation of energy" }, { "paragraph_id": 53, "text": "Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. 
In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.", "title": "Conservation of energy" }, { "paragraph_id": 54, "text": "This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.", "title": "Conservation of energy" }, { "paragraph_id": 55, "text": "Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.", "title": "Conservation of energy" }, { "paragraph_id": 56, "text": "In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scales, the uncertainty in the energy is by", "title": "Conservation of energy" }, { "paragraph_id": 57, "text": "which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).", "title": "Conservation of energy" }, { "paragraph_id": 58, "text": "In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.", "title": "Conservation of energy" }, { "paragraph_id": 59, "text": "Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. 
Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.", "title": "Energy transfer" }, { "paragraph_id": 60, "text": "Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:", "title": "Energy transfer" }, { "paragraph_id": 61, "text": "where E {\\displaystyle E} is the amount of energy transferred, W {\\displaystyle W} represents the work done on or by the system, and Q {\\displaystyle Q} represents the heat flow into or out of the system. As a simplification, the heat term, Q {\\displaystyle Q} , can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,", "title": "Energy transfer" }, { "paragraph_id": 62, "text": "This simplified equation is the one used to define the joule, for example.", "title": "Energy transfer" }, { "paragraph_id": 63, "text": "Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E matter {\\displaystyle E_{\\text{matter}}} , one may write", "title": "Energy transfer" }, { "paragraph_id": 64, "text": "Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.", "title": "Thermodynamics" }, { "paragraph_id": 65, "text": "The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as", "title": "Thermodynamics" }, { "paragraph_id": 66, "text": "where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).", "title": "Thermodynamics" }, { "paragraph_id": 67, "text": "This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and PV-work. 
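The first-law bookkeeping described above can be stated in a few lines of code. The sketch below is a minimal illustration, not part of the source text; it assumes one common sign convention (heat added to the system and work done on the system both counted as positive), and the numerical values are arbitrary.

```python
# Sketch: first-law bookkeeping for a closed system (no matter transfer).
# Assumed convention: Q > 0 means heat flows into the system,
# W > 0 means work is done on the system.

def internal_energy_change(heat_in: float, work_on_system: float) -> float:
    """Return the change in internal energy, dU = Q + W."""
    return heat_in + work_on_system

# A gas receives 500 J of heat while being compressed with 200 J of work:
print(internal_energy_change(heat_in=500.0, work_on_system=200.0))  # 700.0 J

# Adiabatic limit (Q ~ 0, e.g. a fast compression of a poorly conducting gas):
print(internal_energy_change(heat_in=0.0, work_on_system=350.0))    # 350.0 J
```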
The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by", "title": "Thermodynamics" }, { "paragraph_id": 68, "text": "where δ Q {\\displaystyle \\delta Q} is the heat supplied to the system and δ W {\\displaystyle \\delta W} is the work applied to the system.", "title": "Thermodynamics" }, { "paragraph_id": 69, "text": "The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.", "title": "Thermodynamics" }, { "paragraph_id": 70, "text": "This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between \"new\" and \"old\" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.", "title": "Thermodynamics" } ]
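The mass-on-a-spring example above can be checked numerically. The following sketch is illustrative only (the mass, spring constant, time step, and initial conditions are arbitrary assumptions); it integrates the oscillator with the velocity Verlet scheme and shows that the total energy stays essentially constant while the time-averaged kinetic and potential energies each approach half of it, as the equipartition discussion suggests.

```python
# Sketch: energy conservation and kinetic/potential energy sharing for a
# harmonic oscillator, integrated with velocity Verlet. Parameters are illustrative.
m, k = 1.0, 4.0            # mass and spring constant (arbitrary units)
x, v = 1.0, 0.0            # initial displacement and velocity
dt, steps = 1e-3, 200_000  # time step and number of steps

kin_sum = pot_sum = 0.0
for _ in range(steps):
    a = -k * x / m
    x += v * dt + 0.5 * a * dt * dt
    a_new = -k * x / m
    v += 0.5 * (a + a_new) * dt
    kin_sum += 0.5 * m * v * v
    pot_sum += 0.5 * k * x * x

total_now = 0.5 * m * v * v + 0.5 * k * x * x
print("total energy:", total_now)          # stays very close to the initial 2.0
print("mean kinetic:", kin_sum / steps)    # roughly 1.0, half the total
print("mean potential:", pot_sum / steps)  # roughly 1.0, the other half
```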
In physics, energy is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J). Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object, the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, and the internal energy contained within a thermodynamic system. All living organisms constantly take in and release energy. Due to mass–energy equivalence, any object that has mass when stationary also has an equivalent amount of energy whose form is called rest energy, and any additional energy acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The Earth's climate and ecosystem processes are driven by the energy the planet receives from the Sun.
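As a worked number for the mass–energy equivalence mentioned above, the rest energy of an object follows from E = mc^2; the one-kilogram mass in the sketch below is an arbitrary illustrative choice.

```python
# Sketch: rest energy E = m * c**2 for a 1 kg object (mass chosen arbitrarily).
c = 299_792_458.0   # speed of light in m/s (exact SI value)
m = 1.0             # kg
print(m * c**2)     # about 8.99e16 joules
```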
2001-11-04T11:46:24Z
2023-10-17T22:04:42Z
[ "Template:Colend", "Template:Lang-grc", "Template:Lang-lat", "Template:NumBlk", "Template:Blockquote", "Template:Reflist", "Template:Cite book", "Template:Infobox physical quantity", "Template:Etymology", "Template:Block indent", "Template:Portal", "Template:Cols", "Template:Citation", "Template:Sister project links", "Template:Redirect", "Template:More citations needed section", "Template:Classical mechanics", "Template:Short description", "Template:Thermodynamics", "Template:Cite web", "Template:Cite journal", "Template:Prone to spam", "Template:About", "Template:Pp-semi-indef", "Template:Pp-move-indef", "Template:Refbegin", "Template:Refend", "Template:Footer energy", "Template:Natural resources", "Template:Main", "Template:Anchor", "Template:Clear", "Template:Use British English", "Template:Authority control", "Template:Nature nav", "Template:Webarchive", "Template:ISBN", "Template:Curlie" ]
https://en.wikipedia.org/wiki/Energy
9,653
Expected value
In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would "expect" to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as E or E . {\displaystyle \mathbb {E} .} The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem couldn't be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it. In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability. In the foreword to his treatise, Huygens wrote: It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. 
But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs. During his visit to France in 1655, Huygens learned about de Méré's Problem. From his correspondence with Carcavine a year later (in 1656), he realized his method was essentially the same as Pascal's. Therefore, he knew about Pascal's priority in this subject before his book went to press in 1657. In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables. Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2. More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly: … this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope. The use of the letter E to denote "expected value" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. In German, E stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique. When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as E (upright), E (italic), or E {\displaystyle \mathbb {E} } (in blackboard bold), while a variety of bracket notations (such as E(X), E[X], and EX) are all used. Another popular notation is μX, whereas ⟨X⟩, ⟨X⟩av, and X ¯ {\displaystyle {\overline {X}}} are commonly used in physics, and M(X) in Russian-language literature. As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. 
It is defined component by component, as E[X]i = E[Xi]. Similarly, one may define the expected value of a random matrix X with components Xij by E[X]ij = E[Xij]. Consider a random variable X with a finite list x1, ..., xk of possible outcomes, each of which (respectively) has probability p1, ..., pk of occurring. The expectation of X is defined as Since the probabilities must satisfy p1 + ⋅⋅⋅ + pk = 1, it is natural to interpret E[X] as a weighted average of the xi values, with weights given by their probabilities pi. In the special case that all possible outcomes are equiprobable (that is, p1 = ⋅⋅⋅ = pk), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. Informally, the expectation of a random variable with a countable set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that where x1, x2, ... are the possible outcomes of the random variable X and p1, p2, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context. However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation. Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that f(x) = (x + π). 
It is straightforward to compute in this case that The limit of this expression as a → −∞ and b → ∞ does not exist: if the limits are taken so that a = −b, then the limit is zero, while if the constraint 2a = −b is taken, then the limit is ln(2). To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E[X] left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X. All definitions of the expected value may be expressed in the language of measure theory. In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of approximations of X which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical with the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be absolutely continuous if any of the following conditions are satisfied: These conditions are all equivalent, although this is nontrivial to establish. In this definition, f is called the probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variable X {\displaystyle X} can also be defined on the graph of its cumulative distribution function F {\displaystyle F} by a nearby equality of areas. In fact, E [ X ] = μ {\displaystyle \operatorname {E} [X]=\mu } with a real number μ {\displaystyle \mu } if and only if the two surfaces in the x {\displaystyle x} - y {\displaystyle y} -plane, described by respectively, have the same finite area, i.e. if and both improper Riemann integrals converge. Finally, this is equivalent to the representation also with convergent integrals. Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes xi = 2, with associated probabilities pi = 2, for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has It is natural to say that the expected value equals +∞. There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. 
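The St. Petersburg variable described above, with outcomes 2^i occurring with probability 2^-i, can also be explored by simulation. The sketch below is only an illustration (the random seed and sample sizes are arbitrary assumptions): the sample means keep drifting upward as the sample grows, which is consistent with, though of course not a proof of, the infinite expectation.

```python
# Sketch: sampling the St. Petersburg variable X = 2**i, P(X = 2**i) = 2**-i.
import random

def st_petersburg(rng: random.Random) -> float:
    i = 1
    while rng.random() < 0.5:   # each further "head" doubles the payoff
        i += 1
    return 2.0 ** i

rng = random.Random(0)
for n in (10**3, 10**5, 10**6):
    mean = sum(st_petersburg(rng) for _ in range(n)) / n
    print(n, mean)   # the running means do not settle at any finite value
```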
The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by X = max(X, 0) and X = −min(X, 0). These are nonnegative random variables, and it can be directly checked that X = X − X. Since E[X] and E[X] are both then defined as either nonnegative numbers or +∞, it is then natural to define: According to this definition, E[X] exists and is finite if and only if E[X] and E[X] are both finite. Due to the formula |X| = X + X, this is the case if and only if E|X| is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references. The basic properties below (and their names in bold) replicate or follow immediately from those of Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like X ≥ 0 {\displaystyle X\geq 0} is true almost surely, when the probability measure attributes zero-mass to the complementary event { X < 0 } . {\displaystyle \left\{X<0\right\}.} Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable |X−E[X]| to obtain Chebyshev's inequality where Var is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted dice, Chebyshev's inequality says that odds of rolling between 1 and 6 is at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables. The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory. The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces. 
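The Markov and Chebyshev bounds above are easy to compare against exact tail probabilities for a concrete distribution. In the sketch below the Exponential(1) distribution (mean 1, variance 1) is an arbitrary illustrative choice; as noted above, the bounds are valid but often far from tight.

```python
# Sketch: exact tails of X ~ Exponential(1) versus the Markov and Chebyshev bounds.
import math

mean, var = 1.0, 1.0

def tail(a: float) -> float:
    """P(X >= a) for X ~ Exponential(1)."""
    return math.exp(-a)

for a in (2.0, 5.0, 10.0):
    print(f"P(X >= {a}) = {tail(a):.4f}   Markov bound E[X]/a = {mean / a:.4f}")

for k in (2.0, 3.0):
    # Two-sided tail P(|X - mean| >= k*sigma); the left tail is empty once k >= 1.
    exact = tail(mean + k * math.sqrt(var))
    print(f"P(|X - 1| >= {k}) = {exact:.4f}   Chebyshev bound 1/k^2 = {1 / k**2:.4f}")
```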
In general, it is not the case that E [ X n ] → E [ X ] {\displaystyle \operatorname {E} [X_{n}]\to \operatorname {E} [X]} even if X n → X {\displaystyle X_{n}\to X} pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let U {\displaystyle U} be a random variable distributed uniformly on [ 0 , 1 ] . {\displaystyle [0,1].} For n ≥ 1 , {\displaystyle n\geq 1,} define a sequence of random variables with 1 { A } {\displaystyle {\mathbf {1} }\{A\}} being the indicator function of the event A . {\displaystyle A.} Then, it follows that X n → 0 {\displaystyle X_{n}\to 0} pointwise. But, E [ X n ] = n ⋅ P ( U ∈ [ 0 , 1 n ] ) = n ⋅ 1 n = 1 {\displaystyle \operatorname {E} [X_{n}]=n\cdot \operatorname {P} \left(U\in \left[0,{\tfrac {1}{n}}\right]\right)=n\cdot {\tfrac {1}{n}}=1} for each n . {\displaystyle n.} Hence, lim n → ∞ E [ X n ] = 1 ≠ 0 = E [ lim n → ∞ X n ] . {\displaystyle \lim _{n\to \infty }\operatorname {E} [X_{n}]=1\neq 0=\operatorname {E} \left[\lim _{n\to \infty }X_{n}\right].} Analogously, for general sequence of random variables { Y n : n ≥ 0 } , {\displaystyle \{Y_{n}:n\geq 0\},} the expected value operator is not σ {\displaystyle \sigma } -additive, i.e. An example is easily obtained by setting Y 0 = X 1 {\displaystyle Y_{0}=X_{1}} and Y n = X n + 1 − X n {\displaystyle Y_{n}=X_{n+1}-X_{n}} for n ≥ 1 , {\displaystyle n\geq 1,} where X n {\displaystyle X_{n}} is as in the previous example. A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below. The probability density function f X {\displaystyle f_{X}} of a scalar random variable X {\displaystyle X} is related to its characteristic function φ X {\displaystyle \varphi _{X}} by the inversion formula: For the expected value of g ( X ) {\displaystyle g(X)} (where g : R → R {\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }} is a Borel function), we can use this inversion formula to obtain If E [ g ( X ) ] {\displaystyle \operatorname {E} [g(X)]} is finite, changing the order of integration, we get, in accordance with Fubini–Tonelli theorem, where is the Fourier transform of g ( x ) . {\displaystyle g(x).} The expression for E [ g ( X ) ] {\displaystyle \operatorname {E} [g(X)]} also follows directly from the Plancherel theorem. The expectation of a random variable plays an important role in a variety of contexts. In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter. For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies. 
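The identity P(A) = E[1_A] in the preceding sentence is exactly what underlies estimating a probability by a frequency. A minimal sketch follows; the chosen event (a standard normal variable exceeding 1) and the sample size are illustrative assumptions.

```python
# Sketch: estimating P(A) as the sample mean of the indicator 1_A,
# here for the event A = {Z > 1} with Z standard normal.
import random
import statistics

rng = random.Random(3)
n = 200_000
indicator = [1.0 if rng.gauss(0.0, 1.0) > 1.0 else 0.0 for _ in range(n)]

estimate = sum(indicator) / n
exact = 1.0 - statistics.NormalDist().cdf(1.0)   # 1 - Phi(1), about 0.1587
print(round(estimate, 4), round(exact, 4))
```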
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller. This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P ( X ∈ A ) = E [ 1 A ] , {\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [{\mathbf {1} }_{\mathcal {A}}],} where 1 A {\displaystyle {\mathbf {1} }_{\mathcal {A}}} is the indicator function of the set A . {\displaystyle {\mathcal {A}}.} In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X]. Expected values can also be used to compute the variance, by means of the computational formula for the variance A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator A ^ {\displaystyle {\hat {A}}} operating on a quantum state vector | ψ ⟩ {\displaystyle |\psi \rangle } is written as ⟨ A ^ ⟩ = ⟨ ψ | A | ψ ⟩ . {\displaystyle \langle {\hat {A}}\rangle =\langle \psi |A|\psi \rangle .} The uncertainty in A ^ {\displaystyle {\hat {A}}} can be calculated by the formula ( Δ A ) 2 = ⟨ A ^ 2 ⟩ − ⟨ A ^ ⟩ 2 {\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}} .
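Several of the points above can be seen in one short simulation: the sample mean as an estimator of E[X], the shrinking spread of that estimate with sample size, and the computational formula Var(X) = E[X^2] - (E[X])^2. The uniform distribution on [0, 1) and the sample sizes below are illustrative assumptions.

```python
# Sketch: estimating E[X] and Var(X) for X uniform on [0, 1)
# (exact values: E[X] = 1/2, Var(X) = 1/12 ~ 0.0833).
import random

rng = random.Random(7)
for n in (100, 10_000, 1_000_000):
    xs = [rng.random() for _ in range(n)]
    mean = sum(xs) / n
    second_moment = sum(x * x for x in xs) / n
    var = second_moment - mean**2            # computational formula for the variance
    print(n, round(mean, 4), round(var, 4))  # drifts toward 0.5 and 0.0833
```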
[ { "paragraph_id": 0, "text": "In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would \"expect\" to get in reality.", "title": "" }, { "paragraph_id": 1, "text": "The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration.", "title": "" }, { "paragraph_id": 2, "text": "The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as E or E . {\\displaystyle \\mathbb {E} .}", "title": "" }, { "paragraph_id": 3, "text": "The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem couldn't be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.", "title": "History" }, { "paragraph_id": 4, "text": "He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.", "title": "History" }, { "paragraph_id": 5, "text": "In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (see Huygens (1657)) \"De ratiociniis in ludo aleæ\" on probability theory just after visiting Paris. 
The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability.", "title": "History" }, { "paragraph_id": 6, "text": "In the foreword to his treatise, Huygens wrote:", "title": "History" }, { "paragraph_id": 7, "text": "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs.", "title": "History" }, { "paragraph_id": 8, "text": "During his visit to France in 1655, Huygens learned about de Méré's Problem. From his correspondence with Carcavine a year later (in 1656), he realized his method was essentially the same as Pascal's. Therefore, he knew about Pascal's priority in this subject before his book went to press in 1657.", "title": "History" }, { "paragraph_id": 9, "text": "In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.", "title": "History" }, { "paragraph_id": 10, "text": "Neither Pascal nor Huygens used the term \"expectation\" in its modern sense. In particular, Huygens writes:", "title": "History" }, { "paragraph_id": 11, "text": "That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2.", "title": "History" }, { "paragraph_id": 12, "text": "More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract \"Théorie analytique des probabilités\", where the concept of expected value was defined explicitly:", "title": "History" }, { "paragraph_id": 13, "text": "… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.", "title": "History" }, { "paragraph_id": 14, "text": "The use of the letter E to denote \"expected value\" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. 
In German, E stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique.", "title": "Notations" }, { "paragraph_id": 15, "text": "When \"E\" is used to denote \"expected value\", authors use a variety of stylizations: the expectation operator can be stylized as E (upright), E (italic), or E {\\displaystyle \\mathbb {E} } (in blackboard bold), while a variety of bracket notations (such as E(X), E[X], and EX) are all used.", "title": "Notations" }, { "paragraph_id": 16, "text": "Another popular notation is μX, whereas ⟨X⟩, ⟨X⟩av, and X ¯ {\\displaystyle {\\overline {X}}} are commonly used in physics, and M(X) in Russian-language literature.", "title": "Notations" }, { "paragraph_id": 17, "text": "As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language.", "title": "Definition" }, { "paragraph_id": 18, "text": "Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E[X]i = E[Xi]. Similarly, one may define the expected value of a random matrix X with components Xij by E[X]ij = E[Xij].", "title": "Definition" }, { "paragraph_id": 19, "text": "Consider a random variable X with a finite list x1, ..., xk of possible outcomes, each of which (respectively) has probability p1, ..., pk of occurring. The expectation of X is defined as", "title": "Definition" }, { "paragraph_id": 20, "text": "Since the probabilities must satisfy p1 + ⋅⋅⋅ + pk = 1, it is natural to interpret E[X] as a weighted average of the xi values, with weights given by their probabilities pi.", "title": "Definition" }, { "paragraph_id": 21, "text": "In the special case that all possible outcomes are equiprobable (that is, p1 = ⋅⋅⋅ = pk), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others.", "title": "Definition" }, { "paragraph_id": 22, "text": "Informally, the expectation of a random variable with a countable set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that", "title": "Definition" }, { "paragraph_id": 23, "text": "where x1, x2, ... are the possible outcomes of the random variable X and p1, p2, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.", "title": "Definition" }, { "paragraph_id": 24, "text": "However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. 
In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely.", "title": "Definition" }, { "paragraph_id": 25, "text": "For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation.", "title": "Definition" }, { "paragraph_id": 26, "text": "Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral", "title": "Definition" }, { "paragraph_id": 27, "text": "A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors.", "title": "Definition" }, { "paragraph_id": 28, "text": "Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that f(x) = (x + π). It is straightforward to compute in this case that", "title": "Definition" }, { "paragraph_id": 29, "text": "The limit of this expression as a → −∞ and b → ∞ does not exist: if the limits are taken so that a = −b, then the limit is zero, while if the constraint 2a = −b is taken, then the limit is ln(2).", "title": "Definition" }, { "paragraph_id": 30, "text": "To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E[X] left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X.", "title": "Definition" }, { "paragraph_id": 31, "text": "All definitions of the expected value may be expressed in the language of measure theory. In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral", "title": "Definition" }, { "paragraph_id": 32, "text": "Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of approximations of X which take on finitely many values. 
Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical with the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be absolutely continuous if any of the following conditions are satisfied:", "title": "Definition" }, { "paragraph_id": 33, "text": "These conditions are all equivalent, although this is nontrivial to establish. In this definition, f is called the probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that", "title": "Definition" }, { "paragraph_id": 34, "text": "for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable.", "title": "Definition" }, { "paragraph_id": 35, "text": "The expected value of any real-valued random variable X {\\displaystyle X} can also be defined on the graph of its cumulative distribution function F {\\displaystyle F} by a nearby equality of areas. In fact, E [ X ] = μ {\\displaystyle \\operatorname {E} [X]=\\mu } with a real number μ {\\displaystyle \\mu } if and only if the two surfaces in the x {\\displaystyle x} - y {\\displaystyle y} -plane, described by", "title": "Definition" }, { "paragraph_id": 36, "text": "respectively, have the same finite area, i.e. if", "title": "Definition" }, { "paragraph_id": 37, "text": "and both improper Riemann integrals converge. Finally, this is equivalent to the representation", "title": "Definition" }, { "paragraph_id": 38, "text": "also with convergent integrals.", "title": "Definition" }, { "paragraph_id": 39, "text": "Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes xi = 2, with associated probabilities pi = 2, for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has", "title": "Definition" }, { "paragraph_id": 40, "text": "It is natural to say that the expected value equals +∞.", "title": "Definition" }, { "paragraph_id": 41, "text": "There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by X = max(X, 0) and X = −min(X, 0). These are nonnegative random variables, and it can be directly checked that X = X − X. 
Since E[X] and E[X] are both then defined as either nonnegative numbers or +∞, it is then natural to define:", "title": "Definition" }, { "paragraph_id": 42, "text": "According to this definition, E[X] exists and is finite if and only if E[X] and E[X] are both finite. Due to the formula |X| = X + X, this is the case if and only if E|X| is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations.", "title": "Definition" }, { "paragraph_id": 43, "text": "The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references.", "title": "Expected values of common distributions" }, { "paragraph_id": 44, "text": "The basic properties below (and their names in bold) replicate or follow immediately from those of Lebesgue integral. Note that the letters \"a.s.\" stand for \"almost surely\"—a central property of the Lebesgue integral. Basically, one says that an inequality like X ≥ 0 {\\displaystyle X\\geq 0} is true almost surely, when the probability measure attributes zero-mass to the complementary event { X < 0 } . {\\displaystyle \\left\\{X<0\\right\\}.}", "title": "Properties" }, { "paragraph_id": 45, "text": "", "title": "Properties" }, { "paragraph_id": 46, "text": "Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that", "title": "Properties" }, { "paragraph_id": 47, "text": "If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable |X−E[X]| to obtain Chebyshev's inequality", "title": "Properties" }, { "paragraph_id": 48, "text": "where Var is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted dice, Chebyshev's inequality says that odds of rolling between 1 and 6 is at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables.", "title": "Properties" }, { "paragraph_id": 49, "text": "The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory.", "title": "Properties" }, { "paragraph_id": 50, "text": "The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. 
By contrast, the Jensen inequality is special to the case of probability spaces.", "title": "Properties" }, { "paragraph_id": 51, "text": "In general, it is not the case that E [ X n ] → E [ X ] {\\displaystyle \\operatorname {E} [X_{n}]\\to \\operatorname {E} [X]} even if X n → X {\\displaystyle X_{n}\\to X} pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let U {\\displaystyle U} be a random variable distributed uniformly on [ 0 , 1 ] . {\\displaystyle [0,1].} For n ≥ 1 , {\\displaystyle n\\geq 1,} define a sequence of random variables", "title": "Properties" }, { "paragraph_id": 52, "text": "with 1 { A } {\\displaystyle {\\mathbf {1} }\\{A\\}} being the indicator function of the event A . {\\displaystyle A.} Then, it follows that X n → 0 {\\displaystyle X_{n}\\to 0} pointwise. But, E [ X n ] = n ⋅ P ( U ∈ [ 0 , 1 n ] ) = n ⋅ 1 n = 1 {\\displaystyle \\operatorname {E} [X_{n}]=n\\cdot \\operatorname {P} \\left(U\\in \\left[0,{\\tfrac {1}{n}}\\right]\\right)=n\\cdot {\\tfrac {1}{n}}=1} for each n . {\\displaystyle n.} Hence, lim n → ∞ E [ X n ] = 1 ≠ 0 = E [ lim n → ∞ X n ] . {\\displaystyle \\lim _{n\\to \\infty }\\operatorname {E} [X_{n}]=1\\neq 0=\\operatorname {E} \\left[\\lim _{n\\to \\infty }X_{n}\\right].}", "title": "Properties" }, { "paragraph_id": 53, "text": "Analogously, for general sequence of random variables { Y n : n ≥ 0 } , {\\displaystyle \\{Y_{n}:n\\geq 0\\},} the expected value operator is not σ {\\displaystyle \\sigma } -additive, i.e.", "title": "Properties" }, { "paragraph_id": 54, "text": "An example is easily obtained by setting Y 0 = X 1 {\\displaystyle Y_{0}=X_{1}} and Y n = X n + 1 − X n {\\displaystyle Y_{n}=X_{n+1}-X_{n}} for n ≥ 1 , {\\displaystyle n\\geq 1,} where X n {\\displaystyle X_{n}} is as in the previous example.", "title": "Properties" }, { "paragraph_id": 55, "text": "A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.", "title": "Properties" }, { "paragraph_id": 56, "text": "The probability density function f X {\\displaystyle f_{X}} of a scalar random variable X {\\displaystyle X} is related to its characteristic function φ X {\\displaystyle \\varphi _{X}} by the inversion formula:", "title": "Properties" }, { "paragraph_id": 57, "text": "For the expected value of g ( X ) {\\displaystyle g(X)} (where g : R → R {\\displaystyle g:{\\mathbb {R} }\\to {\\mathbb {R} }} is a Borel function), we can use this inversion formula to obtain", "title": "Properties" }, { "paragraph_id": 58, "text": "If E [ g ( X ) ] {\\displaystyle \\operatorname {E} [g(X)]} is finite, changing the order of integration, we get, in accordance with Fubini–Tonelli theorem,", "title": "Properties" }, { "paragraph_id": 59, "text": "where", "title": "Properties" }, { "paragraph_id": 60, "text": "is the Fourier transform of g ( x ) . {\\displaystyle g(x).} The expression for E [ g ( X ) ] {\\displaystyle \\operatorname {E} [g(X)]} also follows directly from the Plancherel theorem.", "title": "Properties" }, { "paragraph_id": 61, "text": "The expectation of a random variable plays an important role in a variety of contexts.", "title": "Uses and applications" }, { "paragraph_id": 62, "text": "In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. 
In such settings, the sample mean is considered to meet the desirable criterion for a \"good\" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.", "title": "Uses and applications" }, { "paragraph_id": 63, "text": "For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.", "title": "Uses and applications" }, { "paragraph_id": 64, "text": "It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.", "title": "Uses and applications" }, { "paragraph_id": 65, "text": "The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.", "title": "Uses and applications" }, { "paragraph_id": 66, "text": "To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.", "title": "Uses and applications" }, { "paragraph_id": 67, "text": "This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P ( X ∈ A ) = E [ 1 A ] , {\\displaystyle \\operatorname {P} ({X\\in {\\mathcal {A}}})=\\operatorname {E} [{\\mathbf {1} }_{\\mathcal {A}}],} where 1 A {\\displaystyle {\\mathbf {1} }_{\\mathcal {A}}} is the indicator function of the set A . {\\displaystyle {\\mathcal {A}}.}", "title": "Uses and applications" }, { "paragraph_id": 68, "text": "In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].", "title": "Uses and applications" }, { "paragraph_id": 69, "text": "Expected values can also be used to compute the variance, by means of the computational formula for the variance", "title": "Uses and applications" }, { "paragraph_id": 70, "text": "A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator A ^ {\\displaystyle {\\hat {A}}} operating on a quantum state vector | ψ ⟩ {\\displaystyle |\\psi \\rangle } is written as ⟨ A ^ ⟩ = ⟨ ψ | A | ψ ⟩ . 
{\\displaystyle \\langle {\\hat {A}}\\rangle =\\langle \\psi |A|\\psi \\rangle .} The uncertainty in A ^ {\\displaystyle {\\hat {A}}} can be calculated by the formula ( Δ A ) 2 = ⟨ A ^ 2 ⟩ − ⟨ A ^ ⟩ 2 {\\displaystyle (\\Delta A)^{2}=\\langle {\\hat {A}}^{2}\\rangle -\\langle {\\hat {A}}\\rangle ^{2}} .", "title": "Uses and applications" } ]
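The failure of the interchange of limits and expectations described in the Properties paragraphs, together with the Monte Carlo estimation idea from the Uses and applications paragraphs, can be checked numerically. The following Python sketch is illustrative only (the sample size, seed, and variable names are assumptions made for the example): it estimates E[X_n] for X_n = n·1{U ∈ [0, 1/n]} by averaging simulated draws of U, and then evaluates the same sequence at one fixed sample point.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed fixed arbitrarily so the run is reproducible

# Monte Carlo estimate of E[X_n] for X_n = n * 1{U in [0, 1/n]}, with U ~ Uniform[0, 1].
# By the law of large numbers, the sample mean approximates the expectation.
num_samples = 1_000_000                     # assumed sample size
u = rng.uniform(0.0, 1.0, size=num_samples)

for n in (1, 10, 100, 1000):
    x_n = n * (u <= 1.0 / n)                # indicator of the event, scaled by n
    print(f"n = {n:5d}   estimated E[X_n] = {x_n.mean():.3f}")   # stays close to 1

# Pointwise convergence: for a fixed outcome u > 0, X_n(u) = 0 as soon as 1/n < u,
# so the sequence tends to 0 even though every E[X_n] equals 1.
u_fixed = 0.37                              # arbitrary fixed sample point
print([n * (u_fixed <= 1.0 / n) for n in (1, 2, 3, 5, 10, 100)])
```

With these assumptions the printed sample means stay near 1 for every n, while the sequence at the fixed point is eventually 0, which is why additional conditions such as the convergence results mentioned above are needed before limits and expectations may be swapped.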
In probability theory, the expected value is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would "expect" to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized in boldface or as blackboard bold 𝔼.
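As a minimal sketch of the weighted-average description in the summary above, the following example computes the expectation of a fair six-sided die and of a small discrete distribution whose values and probabilities are invented for illustration; in both cases the result is not itself an attainable outcome.

```python
# Expected value of a discrete random variable as a probability-weighted average.
def expected_value(outcomes, probabilities):
    assert abs(sum(probabilities) - 1.0) < 1e-12, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probabilities))

# Fair six-sided die: each face has probability 1/6.
print(expected_value([1, 2, 3, 4, 5, 6], [1 / 6] * 6))      # 3.5, not an attainable roll

# A hypothetical skewed payout; values and probabilities are illustrative only.
print(expected_value([0, 10, 100], [0.70, 0.25, 0.05]))     # 7.5
```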
2001-08-08T18:09:02Z
2023-12-27T10:56:56Z
[ "Template:See also", "Template:Probability fundamentals", "Template:Mvar", "Template:Math", "Template:Cite book", "Template:Cite journal", "Template:Erratum", "Template:Refend", "Template:Short description", "Template:About", "Template:Reflist", "Template:Anchor", "Template:Pb", "Template:Refbegin", "Template:Theory of probability distributions", "Template:Authority control", "Template:Redirect", "Template:Quote", "Template:Dice", "Template:Sfnm", "Template:Frac2", "Template:Cite web", "Template:TOC limit", "Template:Blockquote", "Template:Cn" ]
https://en.wikipedia.org/wiki/Expected_value
9,656
Electric light
An electric light, lamp, or light bulb is an electrical component that produces light. It is the most common form of artificial lighting. Lamps usually have a base made of ceramic, metal, glass, or plastic, which secures the lamp in the socket of a light fixture, which is often called a "lamp" as well. The electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount. The three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps, and LED lamps, which produce light by a flow of electrons across a band gap in a semiconductor. The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to many applications. Most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles. Before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires. During 1799–1800, Alessandro Volta created the voltaic pile, the first electric battery. Current from these batteries could heat copper wire to incandescence. Vasily Vladimirovich Petrov developed the first persistent electric arc in 1802, and English chemist Humphry Davy gave a practical demonstration of an arc light in 1806. In 1840, Warren de la Rue enclosed a platinum coil in a vacuum tube and passed an electric current through it, thus creating one of the world's first electric light bulbs. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although it was an efficient design, the cost of the platinum made it impractical for commercial use. The late 1870s and 1880s were marked by intense competition and innovation, with inventors like Joseph Swan in the UK and Thomas Edison in the US independently developing functional incandescent lamps. Swan's bulbs, based on designs by William Staite, were successful, but the filaments were too thick. Edison worked to create bulbs with thinner filaments, leading to a better design. The rivalry between Swan and Edison eventually led to a merger, forming the Edison and Swan Electric Light Company. By the early twentieth century these had completely replaced arc lamps. While the ability of wires to illuminate when supplied with current was first discovered during the Enlightenment, it took more than a century of continuous and incremental improvement, including numerous designs, patents, and resulting intellectual property disputes, until incandescent light bulbs became commercially available in the 1920s. The first home to be lit by an electric light was Underhill, the home of Joseph Swan, around 1880. The turn of the century saw further improvements in bulb longevity and efficiency, notably with the introduction of the tungsten filament by William D. 
Coolidge, who applied for a patent in 1912. This innovation became a standard for incandescent bulbs for many years. In 1910, Georges Claude introduced the first neon light, paving the way for neon signs which would become ubiquitous in advertising. In 1934, Arthur Compton, a renowned physicist and GE consultant, reported to the GE lamp department on successful experiments with fluorescent lighting at General Electric Co., Ltd. in Great Britain (unrelated to General Electric in the United States). Stimulated by this report, and with all of the key elements available, a team led by George E. Inman built a prototype fluorescent lamp in 1934 at General Electric’s Nela Park (Ohio) engineering laboratory. This was not a trivial exercise; as noted by Arthur A. Bright, "A great deal of experimentation had to be done on lamp sizes and shapes, cathode construction, gas pressures of both argon and mercury vapor, colors of fluorescent powders, methods of attaching them to the inside of the tube, and other details of the lamp and its auxiliaries before the new device was ready for the public." The first practical LED arrived in 1962. In the United States, incandescent, halogen and compact fluorescent light bulbs will stop being sold effective as of August 2023, due to a ban by the U.S. Department of Energy. Compact fluorescent bulbs are included in the ban because of their toxic mercury that can be released into the home if broken and problems with disposal of mercury-containing bulbs in landfills. In its modern form, the incandescent light bulb consists of a coiled filament of tungsten sealed in a globular glass chamber, either a vacuum or full of an inert gas such as argon. When an electric current is connected, the tungsten is heated to 2,000 to 3,300 K (1,730 to 3,030 °C; 3,140 to 5,480 °F) and glows, emitting light that approximates a continuous spectrum. Incandescent bulbs are highly inefficient, in that just 2–5% of the energy consumed is emitted as visible, usable light. The remaining 95% is lost as heat. In warmer climates, the emitted heat must then be removed, putting additional pressure on ventilation or air conditioning systems. In colder weather, the heat byproduct has some value, and has been successfully harnessed for warming in devices such as heat lamps. Incandescent bulbs are nonetheless being phased out in favor of technologies like CFLs and LED bulbs in many countries due to their low energy efficiency. The European Commission estimated in 2012 that a complete ban on incandescent bulbs would contribute 5 to 10 billion euros to the economy and save 15 billion metric tonnes of carbon dioxide emissions. Halogen lamps are usually much smaller than standard incandescent lamps, because for successful operation a bulb temperature over 200 °C is generally necessary. For this reason, most have a bulb of fused silica (quartz) or aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, to reduce ultraviolet emission and to contain hot glass shards should the inner envelope explode during operation. Oily residue from fingerprints may cause a hot quartz envelope to shatter due to excessive heat buildup at the contamination site. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places, unless enclosed by the luminaire. Those designed for 12- or 24-volt operation have compact filaments, useful for good optical control. 
Also, they have higher efficacies (lumens per watt) and longer lives than non-halogen types. The light output remains almost constant throughout their life. Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The inside of the tubes are coated with phosphors that give off visible light when struck by ultraviolet photons. They have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output. Fluorescent lamp fixtures are more costly than incandescent lamps, because they require a ballast to regulate the current through the lamp, but the lower energy cost typically offsets the higher initial cost. Compact fluorescent lamps are available in the same popular sizes as incandescent lamps and are used as an energy-saving alternative in homes. Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them. The solid-state light-emitting diode (LED) has been popular as an indicator light in consumer electronics and professional audio gear since the 1970s. In the 2000s, efficacy and output have risen to the point where LEDs are now being used in lighting applications such as car headlights and brake lights, in flashlights and bicycle lights, as well as in decorative applications, such as holiday lighting. Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively, and consequently have shorter lives. LED technology is useful for lighting designers, because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture. LED lifetime depends strongly on the temperature of the diode. Operating an LED lamp in conditions that increase the internal temperature can greatly shorten the lamp's life. Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rod tips then separating them. The ensuing arc produces a white-hot plasma between the rod tips. These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output, they require ventilation when used indoors, and due to their intensity they need protection from direct sight. Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. It was used commercially beginning in the 1870s for large building and street lighting until it was superseded in the early 20th century by the incandescent light. Carbon arc lamps operate at high power and produce high intensity white light. They also are a point source of light. 
They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II. A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include, neon, argon, xenon, sodium, metal halides, and mercury. The core operating principle is much the same as the carbon arc lamp, but the term "arc lamp" normally refers to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps. With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the electrical ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp. Some lamp types contain a small amount of neon, which permits striking at normal running voltage with no external ignition circuitry. Low-pressure sodium lamps operate this way. The simplest ballasts are just an inductor, and are chosen where cost is the deciding factor, such as street lighting. More advanced electronic ballasts may be designed to maintain constant light output over the life of the lamp, may drive the lamp with a square wave to maintain completely flicker-free output, and shut down in the event of certain faults. The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange-yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting applications. Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered, contrary to broadband or continuous spectra. Many lamp units, or light bulbs, are specified in standardized shape codes and socket names. Incandescent bulbs and their retrofit replacements are often specified as "A19/A60 E26/E27", a common size for those kinds of light bulbs. In this example, the "A" parameters describe the bulb size and shape within the A-series light bulb while the "E" parameters describe the Edison screw base size and thread characteristics. Common comparison parameters include: Less common parameters include color rendering index (CRI). Life expectancy for many types of lamp is defined as the number of hours of operation at which 50% of them fail, that is the median life of the lamps. Production tolerances as low as 1% can create a variance of 25% in lamp life, so in general some lamps will fail well before the rated life expectancy, and some will last much longer. For LEDs, lamp life is defined as the operation time at which 50% of lamps have experienced a 70% decrease in light output. In the 1900s the Phoebus cartel formed in an attempt to reduce the life of electric light bulbs, an example of planned obsolescence. Some types of lamp are also sensitive to switching cycles. Rooms with frequent switching, such as bathrooms, can expect much shorter lamp life than what is printed on the box. Compact fluorescent lamps are particularly sensitive to switching cycles. The total amount of artificial light (especially from street light) is sufficient for cities to be easily visible at night from the air, and from space. 
External lighting grew at a rate of 3–6 percent for the later half of the 20th century and is the major source of light pollution that burdens astronomers and others with 80% of the world's population living in areas with night time light pollution. Light pollution has been shown to have a negative effect on some wildlife. Electric lamps can be used as heat sources, for example in incubators, as infrared lamps in fast food restaurants and toys such as the Kenner Easy-Bake Oven. Lamps can also be used for light therapy to deal with such issues as vitamin D deficiency, skin conditions such as acne and dermatitis, skin cancers, and seasonal affective disorder. Lamps which emit a specific frequency of blue light are also used to treat neonatal jaundice with the treatment which was initially undertaken in hospitals being able to be conducted at home. Electric lamps can also be used as a grow light to aid in plant growth especially in indoor hydroponics and aquatic plants with recent research into the most effective types of light for plant growth. Due to their nonlinear resistance characteristics, tungsten filament lamps have long been used as fast-acting thermistors in electronic circuits. Popular uses have included: In Western culture, a lightbulb — in particular, the appearance of an illuminated lightbulb above a person's head — signifies sudden inspiration. In the Middle East, a light bulb symbol has a sexual connotation. A stylized depiction of a light bulb features as the logo of the Turkish AK Party.
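The efficiency figures quoted in the Types paragraphs (roughly 2–5% of consumed energy emitted as visible light by incandescent bulbs, and 50–100 lumens per watt for fluorescent systems) can be turned into a rough side-by-side comparison. The sketch below is a back-of-the-envelope illustration: every wattage, lumen output, usage and price figure in it is an assumption chosen for the example, not data from the article.

```python
# Rough comparison of light sources by luminous efficacy and yearly running cost.
# Every numeric input below is an illustrative assumption, not data from the article.

lamps = {
    # name:         (watts, lumens) -- values chosen only to look typical
    "incandescent": (60, 800),
    "halogen":      (43, 750),
    "fluorescent":  (14, 800),
    "LED":          (9, 800),
}

HOURS_PER_YEAR = 3 * 365        # assume three hours of use per day
PRICE_PER_KWH = 0.15            # assumed electricity price per kilowatt-hour

for name, (watts, lumens) in lamps.items():
    efficacy = lumens / watts                     # lumens per watt
    kwh_per_year = watts * HOURS_PER_YEAR / 1000  # energy used in a year
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{name:13s} {efficacy:5.1f} lm/W  {kwh_per_year:6.1f} kWh/yr  cost {cost:5.2f}")
```

With these assumed figures the fluorescent and LED lamps fall in the 50–100 lm/W range cited above, while the incandescent lamp's efficacy is several times lower, consistent with the 2–5% visible-light figure.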
[ { "paragraph_id": 0, "text": "An electric light, lamp, or light bulb is an electrical component that produces light. It is the most common form of artificial lighting. Lamps usually have a base made of ceramic, metal, glass, or plastic, which secures the lamp in the socket of a light fixture, which is often called a \"lamp\" as well. The electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount.", "title": "" }, { "paragraph_id": 1, "text": "The three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps, and LED lamps, which produce light by a flow of electrons across a band gap in a semiconductor.", "title": "" }, { "paragraph_id": 2, "text": "The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to many applications. Most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles.", "title": "" }, { "paragraph_id": 3, "text": "Before electric lighting became common in the early 20th century, people used candles, gas lights, oil lamps, and fires. During 1799–1800, Alessandro Volta created the voltaic pile, the first electric battery. Current from these batteries could heat copper wire to incandescence. Vasily Vladimirovich Petrov developed the first persistent electric arc in 1802, and English chemist Humphry Davy gave a practical demonstration of an arc light in 1806.", "title": "History" }, { "paragraph_id": 4, "text": "In 1840, Warren de la Rue enclosed a platinum coil in a vacuum tube and passed an electric current through it, thus creating one of the world's first electric light bulbs. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although it was an efficient design, the cost of the platinum made it impractical for commercial use.", "title": "History" }, { "paragraph_id": 5, "text": "The late 1870s and 1880s were marked by intense competition and innovation, with inventors like Joseph Swan in the UK and Thomas Edison in the US independently developing functional incandescent lamps. Swan's bulbs, based on designs by William Staite, were successful, but the filaments were too thick. Edison worked to create bulbs with thinner filaments, leading to a better design. The rivalry between Swan and Edison eventually led to a merger, forming the Edison and Swan Electric Light Company. 
By the early twentieth century these had completely replaced arc lamps.", "title": "History" }, { "paragraph_id": 6, "text": "While the ability of wires to illuminate when supplied with current was first discovered during the Enlightenment, it took more than a century of continuous and incremental improvement, including numerous designs, patents, and resulting intellectual property disputes, until incandescent light bulbs became commercially available in the 1920s. The first home to be lit by an electric light was Underhill, the home of Joseph Swan, around 1880.", "title": "History" }, { "paragraph_id": 7, "text": "The turn of the century saw further improvements in bulb longevity and efficiency, notably with the introduction of the tungsten filament by William D. Coolidge, who applied for a patent in 1912. This innovation became a standard for incandescent bulbs for many years.", "title": "History" }, { "paragraph_id": 8, "text": "In 1910, Georges Claude introduced the first neon light, paving the way for neon signs which would become ubiquitous in advertising.", "title": "History" }, { "paragraph_id": 9, "text": "In 1934, Arthur Compton, a renowned physicist and GE consultant, reported to the GE lamp department on successful experiments with fluorescent lighting at General Electric Co., Ltd. in Great Britain (unrelated to General Electric in the United States). Stimulated by this report, and with all of the key elements available, a team led by George E. Inman built a prototype fluorescent lamp in 1934 at General Electric’s Nela Park (Ohio) engineering laboratory. This was not a trivial exercise; as noted by Arthur A. Bright, \"A great deal of experimentation had to be done on lamp sizes and shapes, cathode construction, gas pressures of both argon and mercury vapor, colors of fluorescent powders, methods of attaching them to the inside of the tube, and other details of the lamp and its auxiliaries before the new device was ready for the public.\"", "title": "History" }, { "paragraph_id": 10, "text": "The first practical LED arrived in 1962.", "title": "History" }, { "paragraph_id": 11, "text": "In the United States, incandescent, halogen and compact fluorescent light bulbs will stop being sold effective as of August 2023, due to a ban by the U.S. Department of Energy. Compact fluorescent bulbs are included in the ban because of their toxic mercury that can be released into the home if broken and problems with disposal of mercury-containing bulbs in landfills.", "title": "History" }, { "paragraph_id": 12, "text": "In its modern form, the incandescent light bulb consists of a coiled filament of tungsten sealed in a globular glass chamber, either a vacuum or full of an inert gas such as argon. When an electric current is connected, the tungsten is heated to 2,000 to 3,300 K (1,730 to 3,030 °C; 3,140 to 5,480 °F) and glows, emitting light that approximates a continuous spectrum.", "title": "Types" }, { "paragraph_id": 13, "text": "Incandescent bulbs are highly inefficient, in that just 2–5% of the energy consumed is emitted as visible, usable light. The remaining 95% is lost as heat. In warmer climates, the emitted heat must then be removed, putting additional pressure on ventilation or air conditioning systems. In colder weather, the heat byproduct has some value, and has been successfully harnessed for warming in devices such as heat lamps. 
Incandescent bulbs are nonetheless being phased out in favor of technologies like CFLs and LED bulbs in many countries due to their low energy efficiency. The European Commission estimated in 2012 that a complete ban on incandescent bulbs would contribute 5 to 10 billion euros to the economy and save 15 billion metric tonnes of carbon dioxide emissions.", "title": "Types" }, { "paragraph_id": 14, "text": "Halogen lamps are usually much smaller than standard incandescent lamps, because for successful operation a bulb temperature over 200 °C is generally necessary. For this reason, most have a bulb of fused silica (quartz) or aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, to reduce ultraviolet emission and to contain hot glass shards should the inner envelope explode during operation. Oily residue from fingerprints may cause a hot quartz envelope to shatter due to excessive heat buildup at the contamination site. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places, unless enclosed by the luminaire.", "title": "Types" }, { "paragraph_id": 15, "text": "Those designed for 12- or 24-volt operation have compact filaments, useful for good optical control. Also, they have higher efficacies (lumens per watt) and longer lives than non-halogen types. The light output remains almost constant throughout their life.", "title": "Types" }, { "paragraph_id": 16, "text": "Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The inside of the tubes are coated with phosphors that give off visible light when struck by ultraviolet photons. They have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output. Fluorescent lamp fixtures are more costly than incandescent lamps, because they require a ballast to regulate the current through the lamp, but the lower energy cost typically offsets the higher initial cost. Compact fluorescent lamps are available in the same popular sizes as incandescent lamps and are used as an energy-saving alternative in homes. Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them.", "title": "Types" }, { "paragraph_id": 17, "text": "The solid-state light-emitting diode (LED) has been popular as an indicator light in consumer electronics and professional audio gear since the 1970s. In the 2000s, efficacy and output have risen to the point where LEDs are now being used in lighting applications such as car headlights and brake lights, in flashlights and bicycle lights, as well as in decorative applications, such as holiday lighting. Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively, and consequently have shorter lives. 
LED technology is useful for lighting designers, because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture. LED lifetime depends strongly on the temperature of the diode. Operating an LED lamp in conditions that increase the internal temperature can greatly shorten the lamp's life.", "title": "Types" }, { "paragraph_id": 18, "text": "Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rod tips then separating them. The ensuing arc produces a white-hot plasma between the rod tips. These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output, they require ventilation when used indoors, and due to their intensity they need protection from direct sight.", "title": "Types" }, { "paragraph_id": 19, "text": "Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. It was used commercially beginning in the 1870s for large building and street lighting until it was superseded in the early 20th century by the incandescent light. Carbon arc lamps operate at high power and produce high intensity white light. They also are a point source of light. They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II.", "title": "Types" }, { "paragraph_id": 20, "text": "A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include, neon, argon, xenon, sodium, metal halides, and mercury. The core operating principle is much the same as the carbon arc lamp, but the term \"arc lamp\" normally refers to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps. With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the electrical ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp.", "title": "Types" }, { "paragraph_id": 21, "text": "Some lamp types contain a small amount of neon, which permits striking at normal running voltage with no external ignition circuitry. Low-pressure sodium lamps operate this way. The simplest ballasts are just an inductor, and are chosen where cost is the deciding factor, such as street lighting. More advanced electronic ballasts may be designed to maintain constant light output over the life of the lamp, may drive the lamp with a square wave to maintain completely flicker-free output, and shut down in the event of certain faults.", "title": "Types" }, { "paragraph_id": 22, "text": "The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange-yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting applications. 
Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered, contrary to broadband or continuous spectra.", "title": "Types" }, { "paragraph_id": 23, "text": "Many lamp units, or light bulbs, are specified in standardized shape codes and socket names. Incandescent bulbs and their retrofit replacements are often specified as \"A19/A60 E26/E27\", a common size for those kinds of light bulbs. In this example, the \"A\" parameters describe the bulb size and shape within the A-series light bulb while the \"E\" parameters describe the Edison screw base size and thread characteristics.", "title": "Characteristics" }, { "paragraph_id": 24, "text": "Common comparison parameters include:", "title": "Characteristics" }, { "paragraph_id": 25, "text": "Less common parameters include color rendering index (CRI).", "title": "Characteristics" }, { "paragraph_id": 26, "text": "Life expectancy for many types of lamp is defined as the number of hours of operation at which 50% of them fail, that is the median life of the lamps. Production tolerances as low as 1% can create a variance of 25% in lamp life, so in general some lamps will fail well before the rated life expectancy, and some will last much longer. For LEDs, lamp life is defined as the operation time at which 50% of lamps have experienced a 70% decrease in light output. In the 1900s the Phoebus cartel formed in an attempt to reduce the life of electric light bulbs, an example of planned obsolescence.", "title": "Characteristics" }, { "paragraph_id": 27, "text": "Some types of lamp are also sensitive to switching cycles. Rooms with frequent switching, such as bathrooms, can expect much shorter lamp life than what is printed on the box. Compact fluorescent lamps are particularly sensitive to switching cycles.", "title": "Characteristics" }, { "paragraph_id": 28, "text": "The total amount of artificial light (especially from street light) is sufficient for cities to be easily visible at night from the air, and from space. External lighting grew at a rate of 3–6 percent for the later half of the 20th century and is the major source of light pollution that burdens astronomers and others with 80% of the world's population living in areas with night time light pollution. Light pollution has been shown to have a negative effect on some wildlife.", "title": "Uses" }, { "paragraph_id": 29, "text": "Electric lamps can be used as heat sources, for example in incubators, as infrared lamps in fast food restaurants and toys such as the Kenner Easy-Bake Oven.", "title": "Uses" }, { "paragraph_id": 30, "text": "Lamps can also be used for light therapy to deal with such issues as vitamin D deficiency, skin conditions such as acne and dermatitis, skin cancers, and seasonal affective disorder. Lamps which emit a specific frequency of blue light are also used to treat neonatal jaundice with the treatment which was initially undertaken in hospitals being able to be conducted at home.", "title": "Uses" }, { "paragraph_id": 31, "text": "Electric lamps can also be used as a grow light to aid in plant growth especially in indoor hydroponics and aquatic plants with recent research into the most effective types of light for plant growth.", "title": "Uses" }, { "paragraph_id": 32, "text": "Due to their nonlinear resistance characteristics, tungsten filament lamps have long been used as fast-acting thermistors in electronic circuits. 
Popular uses have included:", "title": "Uses" }, { "paragraph_id": 33, "text": "In Western culture, a lightbulb — in particular, the appearance of an illuminated lightbulb above a person's head — signifies sudden inspiration.", "title": "Cultural symbolism" }, { "paragraph_id": 34, "text": "In the Middle East, a light bulb symbol has a sexual connotation.", "title": "Cultural symbolism" }, { "paragraph_id": 35, "text": "A stylized depiction of a light bulb features as the logo of the Turkish AK Party.", "title": "Cultural symbolism" } ]
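The two lamp-life definitions in the Characteristics paragraphs (the median time at which half of the lamps have failed, and, for LEDs, the time at which half of the lamps have lost 30% of their output) can be illustrated with a short sketch. The failure times and the exponential lumen-depreciation model below are invented for the example and are not manufacturer data.

```python
import math
import statistics

# Rated life: the median operating time at which 50% of the lamps have failed.
# The failure times below are made-up sample data, in hours.
failure_hours = [740, 820, 910, 960, 1010, 1090, 1150, 1230, 1420, 1680]
print("median (rated) life:", statistics.median(failure_hours), "hours")

# LED life (L70): the time at which light output has fallen to 70% of its initial value.
# Assume, purely for illustration, exponential lumen depreciation
#   output(t) = exp(-t / tau), with tau an assumed decay constant in hours.
tau = 140_000.0
l70_hours = -tau * math.log(0.70)     # solve exp(-t / tau) = 0.70 for t
print("approximate L70 life:", round(l70_hours), "hours")
```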
An electric light, lamp, or light bulb is an electrical component that produces light. It is the most common form of artificial lighting. Lamps usually have a base made of ceramic, metal, glass, or plastic, which secures the lamp in the socket of a light fixture, which is often called a "lamp" as well. The electrical connection to the socket may be made with a screw-thread base, two metal pins, two metal caps or a bayonet mount. The three main categories of electric lights are incandescent lamps, which produce light by a filament heated white-hot by electric current, gas-discharge lamps, which produce light by means of an electric arc through a gas, such as fluorescent lamps, and LED lamps, which produce light by a flow of electrons across a band gap in a semiconductor. The energy efficiency of electric lighting has increased radically since the first demonstration of arc lamps and the incandescent light bulb of the 19th century. Modern electric light sources come in a profusion of types and sizes adapted to many applications. Most modern electric lighting is powered by centrally generated electric power, but lighting may also be powered by mobile or standby electric generators or battery systems. Battery-powered light is often reserved for when and where stationary lights fail, often in the form of flashlights or electric lanterns, as well as in vehicles.
2002-02-25T15:51:15Z
2023-12-25T03:57:58Z
[ "Template:Artificial light sources", "Template:Main", "Template:Reflist", "Template:Short description", "Template:Cite web", "Template:Cite book", "Template:Cite patent", "Template:Authority control", "Template:Infobox electronic component", "Template:Update after", "Template:Cite journal", "Template:Cite news", "Template:Hatgrp", "Template:Pp-pc1" ]
https://en.wikipedia.org/wiki/Electric_light
9,657
Edgar Rice Burroughs
Edgar Rice Burroughs (September 1, 1875 – March 19, 1950) was an American writer, best known for his prolific output in the adventure, science fiction, and fantasy genres. Best known for creating the characters Tarzan and John Carter, he also wrote the Pellucidar series, the Amtor series, and the Caspak trilogy. Tarzan was immediately popular, and Burroughs capitalized on it in every possible way, including a syndicated Tarzan comic strip, films, and merchandise. Tarzan remains one of the most successful fictional characters to this day and is a cultural icon. Burroughs's California ranch is now the center of the Tarzana neighborhood in Los Angeles, named after the character. Burroughs was born on September 1, 1875, in Chicago (he later lived for many years in the suburb of Oak Park), the fourth son of Major George Tyler Burroughs (1833–1913), a businessman and Civil War veteran, and his wife, Mary Evaline (Zieger) Burroughs (1840–1920). His middle name is from his paternal grandmother, Mary Coleman Rice Burroughs (1802–1889). Burroughs was of almost entirely English ancestry, with a family line that had been in North America since the Colonial era. Through his Rice grandmother, Burroughs was descended from settler Edmund Rice, one of the English Puritans who moved to Massachusetts Bay Colony in the early 17th century. He once remarked: "I can trace my ancestry back to Deacon Edmund Rice." The Burroughs side of the family was also of English origin, having emigrated to Massachusetts around the same time. Many of his ancestors fought in the American Revolution. Some of his ancestors settled in Virginia during the colonial period, and Burroughs often emphasized his connection with that side of his family, seeing it as romantic and warlike. As close cousins he had seven signatories of the U.S. Declaration of Independence, including his third cousin, four times removed, 2nd President of the United States John Adams. Burroughs was educated at a number of local schools. He then attended Phillips Academy, in Andover, Massachusetts, and then the Michigan Military Academy. Graduating in 1895, but failing the entrance exam for the United States Military Academy at West Point, he instead became an enlisted soldier with the 7th U.S. Cavalry in Fort Grant, Arizona Territory. After being diagnosed with a heart problem and thus ineligible to serve, he was discharged in 1897. After his discharge, Burroughs worked at a number of different jobs. During the Chicago influenza epidemic of 1891, he spent half a year at his brother's ranch on the Raft River in Idaho, as a cowboy, drifted somewhat afterward, then worked at his father's Chicago battery factory in 1899, marrying his childhood sweetheart, Emma Hulbert (1876–1944), in January 1900. In 1903, Burroughs joined his brothers, Yale graduates George and Harry, who were, by then, prominent Pocatello area ranchers in southern Idaho, and partners in the Sweetser-Burroughs Mining Company, where he took on managing their ill-fated Snake River gold dredge, a classic bucket-line dredge. The Burroughs brothers were also the sixth cousins, once removed, of famed miner Kate Rice who, in 1914, became the first female prospector in the Canadian North. Journalist and publisher C. Allen Thorndike Rice was also his third cousin. When the new mine proved unsuccessful, the brothers secured for Burroughs a position with the Oregon Short Line Railroad in Salt Lake City. Burroughs resigned from the railroad in October 1904. 
By 1911, around age 36, after seven years of low wages as a pencil-sharpener wholesaler, Burroughs began to write fiction. By this time, Emma and he had two children, Joan (1908–1972), and Hulbert (1909–1991). During this period, he had copious spare time and began reading pulp-fiction magazines. In 1929, he recalled thinking that: "[...] if people were paid for writing rot such as I read in some of those magazines, that I could write stories just as rotten. As a matter of fact, although I had never written a story, I knew absolutely that I could write stories just as entertaining and probably a whole lot more so than any I chanced to read in those magazines." In 1913, Burroughs and Emma had their third and last child, John Coleman Burroughs (1913–1979), later known for his illustrations of his father's books. In the 1920s, Burroughs became a pilot, purchased a Security Airster S-1, and encouraged his family to learn to fly. Daughter Joan married Tarzan film actor James Pierce. She starred with her husband as the voice of Jane, during 1932–1934 for the Tarzan radio series. The pair were married for more than forty years, separated only by her death in 1972. Burroughs divorced Emma in 1934, and, in 1935, married the former actress Florence Gilbert Dearholt, who was the former wife of his friend (who was then himself remarrying), Ashton Dearholt, with whom he had co-founded Burroughs-Tarzan Enterprises while filming The New Adventures of Tarzan. Burroughs adopted the Dearholts' two children. He and Florence divorced in 1942. Burroughs was in his late 60s and was in Honolulu at the time of the Japanese attack on Pearl Harbor. Despite his age, he applied for and received permission to become a war correspondent, becoming one of the oldest U.S. war correspondents during World War II. This period of his life is mentioned in William Brinkley's bestselling novel Don't Go Near the Water. After the war ended, Burroughs moved back to Encino, California, where after many health problems, he died of a heart attack on March 19, 1950, having written almost 80 novels. He is buried in Tarzana, California, US. At the time of his death he was believed to have been the writer who had made the most from films, earning over US$2 million in royalties from 27 Tarzan pictures. The Science Fiction Hall of Fame inducted Burroughs in 2003. Aiming his work at the pulps—under the name "Norman Bean" to protect his reputation—Burroughs had his first story, Under the Moons of Mars, serialized by Frank Munsey in the February to July 1912 issues of The All-Story. Under the Moons of Mars inaugurated the Barsoom series, introduced John Carter, and earned Burroughs US$400 ($11,922 today). It was first published as a book by A. C. McClurg of Chicago in 1917, entitled A Princess of Mars, after three Barsoom sequels had appeared as serials and McClurg had published the first four serial Tarzan novels as books. Burroughs soon took up writing full-time, and by the time the run of Under the Moons of Mars had finished, he had completed two novels, including Tarzan of the Apes, published from October 1912 and one of his most successful series. Burroughs also wrote popular science fiction and fantasy stories involving adventurers from Earth transported to various planets (notably Barsoom, Burroughs's fictional name for Mars, and Amtor, his fictional name for Venus), lost islands (Caspak), and into the interior of the Hollow Earth in his Pellucidar stories. He also wrote Westerns and historical romances. 
Besides those published in All-Story, many of his stories were published in The Argosy magazine. Tarzan was a cultural sensation when introduced. Burroughs was determined to capitalize on Tarzan's popularity in every way possible. He planned to exploit Tarzan through several different media including a syndicated Tarzan comic strip, movies, and merchandise. Experts in the field advised against this course of action, stating that the different media would just end up competing against each other. Burroughs went ahead, however, and proved the experts wrong – the public wanted Tarzan in whatever fashion he was offered. Tarzan remains one of the most successful fictional characters to this day and is a cultural icon. In either 1915 or 1919, Burroughs purchased a large ranch north of Los Angeles, California, which he named "Tarzana". The citizens of the community that sprang up around the ranch voted to adopt that name when their community, Tarzana, California, was formed in 1927. Also, the unincorporated community of Tarzan, Texas, was formally named in 1927 when the US Postal Service accepted the name, reputedly coming from the popularity of the first (silent) Tarzan of the Apes film, starring Elmo Lincoln, and an early "Tarzan" comic strip. In 1923, Burroughs set up his own company, Edgar Rice Burroughs, Inc., and began printing his own books through the 1930s. Because of the part Burroughs's science fiction played in inspiring real exploration of Mars, an impact crater on Mars was named in his honor after his death. In a Paris Review interview, Ray Bradbury said of Burroughs: "Edgar Rice Burroughs never would have looked upon himself as a social mover and shaker with social obligations. But as it turns out – and I love to say it because it upsets everyone terribly – Burroughs is probably the most influential writer in the entire history of the world. By giving romance and adventure to a whole generation of boys, Burroughs caused them to go out and decide to become special." In Something of Myself (published posthumously in 1937) Rudyard Kipling wrote: "My Jungle Books begat Zoos of [imitators]. But the genius of all the genii was one who wrote a series called Tarzan of the Apes. I read it, but regret I never saw it on the films, where it rages most successfully. He had 'jazzed' the motif of the Jungle Books and, I imagine, had thoroughly enjoyed himself. He was reported to have said that he wanted to find out how bad a book he could write and 'get away with', which is a legitimate ambition." By 1963, Floyd C. Gale of Galaxy Science Fiction wrote when discussing reprints of several Burroughs novels by Ace Books, "an entire generation has grown up inexplicably Burroughs-less". He stated that most of the author's books had been out of print for years and that only the "occasional laughable Tarzan film" reminded public of his fiction. Gale reported his surprise that after two decades his books were again available, with Canaveral Press, Dover Publications, and Ballantine Books also reprinting them. Few critical books have been written about Burroughs. From an academic standpoint, the most helpful are Erling Holtsmark's two books: Tarzan and Tradition and Edgar Rice Burroughs; Stan Galloway's The Teenage Tarzan: A Literary Analysis of Edgar Rice Burroughs' Jungle Tales of Tarzan; and Richard Lupoff's two books: Master of Adventure: Edgar Rice Burroughs and Barsoom: Edgar Rice Burroughs and the Martian Vision. 
Galloway was identified by James Edwin Gunn as "one of the half-dozen finest Burroughs scholars in the world"; Galloway called Holtsmark his "most important predecessor". Burroughs strongly supported eugenics and scientific racism. His views held that English nobles made up a particular heritable elite among Anglo-Saxons. Tarzan was meant to reflect this, with him being born to English nobles and then adopted by talking apes (the Mangani). They express eugenicist views themselves, but Tarzan is permitted to live despite being deemed "unfit" in comparison, and grows up to surpass not only them but black Africans, whom Burroughs clearly presents as inherently inferior, even not wholly human. In one Tarzan story, he finds an ancient civilization where eugenics has been practiced for over 2,000 years, with the result that it is free of all crime. Criminal behavior is held to be entirely hereditary, with the solution having been to kill not only criminals but also their families. Lost on Venus, a later novel, presents a similar utopia where forced sterilization is practiced and the "unfit" are killed. Burroughs explicitly supported such ideas in his unpublished nonfiction essay I See A New Race. Additionally, his Pirate Blood, which is not speculative fiction and remained unpublished after his death, portrayed the characters as victims of their hereditary criminal traits (one a descendant of the corsair Jean Lafitte, another from the Jukes family). These views have been compared with Nazi eugenics (though noting that they were popular and common at the time), with his Lost on Venus being released the same year the Nazis took power (in 1933). In 2003, Burroughs was inducted into the Science Fiction and Fantasy Hall of Fame. These three texts have been published by various houses in one or two volumes. Adding to the confusion, some editions have the original (significantly longer) introduction to Part I from the first publication as a magazine serial, and others have the shorter version from the first book publication, which included all three parts under the title The Moon Maid.
[ { "paragraph_id": 0, "text": "Edgar Rice Burroughs (September 1, 1875 – March 19, 1950) was an American writer, best known for his prolific output in the adventure, science fiction, and fantasy genres. Best known for creating the characters Tarzan and John Carter, he also wrote the Pellucidar series, the Amtor series, and the Caspak trilogy.", "title": "" }, { "paragraph_id": 1, "text": "Tarzan was immediately popular, and Burroughs capitalized on it in every possible way, including a syndicated Tarzan comic strip, films, and merchandise. Tarzan remains one of the most successful fictional characters to this day and is a cultural icon. Burroughs's California ranch is now the center of the Tarzana neighborhood in Los Angeles, named after the character.", "title": "" }, { "paragraph_id": 2, "text": "Burroughs was born on September 1, 1875, in Chicago (he later lived for many years in the suburb of Oak Park), the fourth son of Major George Tyler Burroughs (1833–1913), a businessman and Civil War veteran, and his wife, Mary Evaline (Zieger) Burroughs (1840–1920). His middle name is from his paternal grandmother, Mary Coleman Rice Burroughs (1802–1889).", "title": "Biography" }, { "paragraph_id": 3, "text": "Burroughs was of almost entirely English ancestry, with a family line that had been in North America since the Colonial era.", "title": "Biography" }, { "paragraph_id": 4, "text": "Through his Rice grandmother, Burroughs was descended from settler Edmund Rice, one of the English Puritans who moved to Massachusetts Bay Colony in the early 17th century. He once remarked: \"I can trace my ancestry back to Deacon Edmund Rice.\"", "title": "Biography" }, { "paragraph_id": 5, "text": "The Burroughs side of the family was also of English origin, having emigrated to Massachusetts around the same time. Many of his ancestors fought in the American Revolution. Some of his ancestors settled in Virginia during the colonial period, and Burroughs often emphasized his connection with that side of his family, seeing it as romantic and warlike. As close cousins he had seven signatories of the U.S. Declaration of Independence, including his third cousin, four times removed, 2nd President of the United States John Adams.", "title": "Biography" }, { "paragraph_id": 6, "text": "Burroughs was educated at a number of local schools. He then attended Phillips Academy, in Andover, Massachusetts, and then the Michigan Military Academy. Graduating in 1895, but failing the entrance exam for the United States Military Academy at West Point, he instead became an enlisted soldier with the 7th U.S. Cavalry in Fort Grant, Arizona Territory. After being diagnosed with a heart problem and thus ineligible to serve, he was discharged in 1897.", "title": "Biography" }, { "paragraph_id": 7, "text": "After his discharge, Burroughs worked at a number of different jobs. During the Chicago influenza epidemic of 1891, he spent half a year at his brother's ranch on the Raft River in Idaho, as a cowboy, drifted somewhat afterward, then worked at his father's Chicago battery factory in 1899, marrying his childhood sweetheart, Emma Hulbert (1876–1944), in January 1900.", "title": "Biography" }, { "paragraph_id": 8, "text": "In 1903, Burroughs joined his brothers, Yale graduates George and Harry, who were, by then, prominent Pocatello area ranchers in southern Idaho, and partners in the Sweetser-Burroughs Mining Company, where he took on managing their ill-fated Snake River gold dredge, a classic bucket-line dredge. 
The Burroughs brothers were also the sixth cousins, once removed, of famed miner Kate Rice who, in 1914, became the first female prospector in the Canadian North. Journalist and publisher C. Allen Thorndike Rice was also his third cousin.", "title": "Biography" }, { "paragraph_id": 9, "text": "When the new mine proved unsuccessful, the brothers secured for Burroughs a position with the Oregon Short Line Railroad in Salt Lake City. Burroughs resigned from the railroad in October 1904.", "title": "Biography" }, { "paragraph_id": 10, "text": "By 1911, around age 36, after seven years of low wages as a pencil-sharpener wholesaler, Burroughs began to write fiction. By this time, Emma and he had two children, Joan (1908–1972), and Hulbert (1909–1991). During this period, he had copious spare time and began reading pulp-fiction magazines. In 1929, he recalled thinking that:", "title": "Biography" }, { "paragraph_id": 11, "text": "\"[...] if people were paid for writing rot such as I read in some of those magazines, that I could write stories just as rotten. As a matter of fact, although I had never written a story, I knew absolutely that I could write stories just as entertaining and probably a whole lot more so than any I chanced to read in those magazines.\"", "title": "Biography" }, { "paragraph_id": 12, "text": "In 1913, Burroughs and Emma had their third and last child, John Coleman Burroughs (1913–1979), later known for his illustrations of his father's books.", "title": "Biography" }, { "paragraph_id": 13, "text": "In the 1920s, Burroughs became a pilot, purchased a Security Airster S-1, and encouraged his family to learn to fly.", "title": "Biography" }, { "paragraph_id": 14, "text": "Daughter Joan married Tarzan film actor James Pierce. She starred with her husband as the voice of Jane, during 1932–1934 for the Tarzan radio series. The pair were married for more than forty years, separated only by her death in 1972.", "title": "Biography" }, { "paragraph_id": 15, "text": "Burroughs divorced Emma in 1934, and, in 1935, married the former actress Florence Gilbert Dearholt, who was the former wife of his friend (who was then himself remarrying), Ashton Dearholt, with whom he had co-founded Burroughs-Tarzan Enterprises while filming The New Adventures of Tarzan. Burroughs adopted the Dearholts' two children. He and Florence divorced in 1942.", "title": "Biography" }, { "paragraph_id": 16, "text": "Burroughs was in his late 60s and was in Honolulu at the time of the Japanese attack on Pearl Harbor. Despite his age, he applied for and received permission to become a war correspondent, becoming one of the oldest U.S. war correspondents during World War II. This period of his life is mentioned in William Brinkley's bestselling novel Don't Go Near the Water.", "title": "Biography" }, { "paragraph_id": 17, "text": "After the war ended, Burroughs moved back to Encino, California, where after many health problems, he died of a heart attack on March 19, 1950, having written almost 80 novels. 
He is buried in Tarzana, California, US.", "title": "Biography" }, { "paragraph_id": 18, "text": "At the time of his death he was believed to have been the writer who had made the most from films, earning over US$2 million in royalties from 27 Tarzan pictures.", "title": "Biography" }, { "paragraph_id": 19, "text": "The Science Fiction Hall of Fame inducted Burroughs in 2003.", "title": "Biography" }, { "paragraph_id": 20, "text": "Aiming his work at the pulps—under the name \"Norman Bean\" to protect his reputation—Burroughs had his first story, Under the Moons of Mars, serialized by Frank Munsey in the February to July 1912 issues of The All-Story. Under the Moons of Mars inaugurated the Barsoom series, introduced John Carter, and earned Burroughs US$400 ($11,922 today). It was first published as a book by A. C. McClurg of Chicago in 1917, entitled A Princess of Mars, after three Barsoom sequels had appeared as serials and McClurg had published the first four serial Tarzan novels as books.", "title": "Literary career" }, { "paragraph_id": 21, "text": "Burroughs soon took up writing full-time, and by the time the run of Under the Moons of Mars had finished, he had completed two novels, including Tarzan of the Apes, published from October 1912 and one of his most successful series.", "title": "Literary career" }, { "paragraph_id": 22, "text": "Burroughs also wrote popular science fiction and fantasy stories involving adventurers from Earth transported to various planets (notably Barsoom, Burroughs's fictional name for Mars, and Amtor, his fictional name for Venus), lost islands (Caspak), and into the interior of the Hollow Earth in his Pellucidar stories. He also wrote Westerns and historical romances. Besides those published in All-Story, many of his stories were published in The Argosy magazine.", "title": "Literary career" }, { "paragraph_id": 23, "text": "Tarzan was a cultural sensation when introduced. Burroughs was determined to capitalize on Tarzan's popularity in every way possible. He planned to exploit Tarzan through several different media including a syndicated Tarzan comic strip, movies, and merchandise. Experts in the field advised against this course of action, stating that the different media would just end up competing against each other. Burroughs went ahead, however, and proved the experts wrong – the public wanted Tarzan in whatever fashion he was offered. Tarzan remains one of the most successful fictional characters to this day and is a cultural icon.", "title": "Literary career" }, { "paragraph_id": 24, "text": "In either 1915 or 1919, Burroughs purchased a large ranch north of Los Angeles, California, which he named \"Tarzana\". The citizens of the community that sprang up around the ranch voted to adopt that name when their community, Tarzana, California, was formed in 1927. Also, the unincorporated community of Tarzan, Texas, was formally named in 1927 when the US Postal Service accepted the name, reputedly coming from the popularity of the first (silent) Tarzan of the Apes film, starring Elmo Lincoln, and an early \"Tarzan\" comic strip.", "title": "Literary career" }, { "paragraph_id": 25, "text": "In 1923, Burroughs set up his own company, Edgar Rice Burroughs, Inc., and began printing his own books through the 1930s.", "title": "Literary career" }, { "paragraph_id": 26, "text": "Because of the part Burroughs's science fiction played in inspiring real exploration of Mars, an impact crater on Mars was named in his honor after his death. 
In a Paris Review interview, Ray Bradbury said of Burroughs:", "title": "Reception and criticism" }, { "paragraph_id": 27, "text": "\"Edgar Rice Burroughs never would have looked upon himself as a social mover and shaker with social obligations. But as it turns out – and I love to say it because it upsets everyone terribly – Burroughs is probably the most influential writer in the entire history of the world. By giving romance and adventure to a whole generation of boys, Burroughs caused them to go out and decide to become special.\"", "title": "Reception and criticism" }, { "paragraph_id": 28, "text": "In Something of Myself (published posthumously in 1937) Rudyard Kipling wrote: \"My Jungle Books begat Zoos of [imitators]. But the genius of all the genii was one who wrote a series called Tarzan of the Apes. I read it, but regret I never saw it on the films, where it rages most successfully. He had 'jazzed' the motif of the Jungle Books and, I imagine, had thoroughly enjoyed himself. He was reported to have said that he wanted to find out how bad a book he could write and 'get away with', which is a legitimate ambition.\"", "title": "Reception and criticism" }, { "paragraph_id": 29, "text": "By 1963, Floyd C. Gale of Galaxy Science Fiction wrote when discussing reprints of several Burroughs novels by Ace Books, \"an entire generation has grown up inexplicably Burroughs-less\". He stated that most of the author's books had been out of print for years and that only the \"occasional laughable Tarzan film\" reminded public of his fiction. Gale reported his surprise that after two decades his books were again available, with Canaveral Press, Dover Publications, and Ballantine Books also reprinting them.", "title": "Reception and criticism" }, { "paragraph_id": 30, "text": "Few critical books have been written about Burroughs. From an academic standpoint, the most helpful are Erling Holtsmark's two books: Tarzan and Tradition and Edgar Rice Burroughs; Stan Galloway's The Teenage Tarzan: A Literary Analysis of Edgar Rice Burroughs' Jungle Tales of Tarzan; and Richard Lupoff's two books: Master of Adventure: Edgar Rice Burroughs and Barsoom: Edgar Rice Burroughs and the Martian Vision. Galloway was identified by James Edwin Gunn as \"one of the half-dozen finest Burroughs scholars in the world\"; Galloway called Holtsmark his \"most important predecessor\".", "title": "Reception and criticism" }, { "paragraph_id": 31, "text": "Burroughs strongly supported eugenics and scientific racism. His views held that English nobles made up a particular heritable elite among Anglo-Saxons. Tarzan was meant to reflect this, with him being born to English nobles and then adopted by talking apes (the Mangani). They express eugenicist views themselves, but Tarzan is permitted to live despite being deemed \"unfit\" in comparison, and grows up to surpass not only them but black Africans, whom Burroughs clearly presents as inherently inferior, even not wholly human. In one Tarzan story, he finds an ancient civilization where eugenics has been practiced for over 2,000 years, with the result that it is free of all crime. Criminal behavior is held to be entirely hereditary, with the solution having been to kill not only criminals but also their families. Lost on Venus, a later novel, presents a similar utopia where forced sterilization is practiced and the \"unfit\" are killed. Burroughs explicitly supported such ideas in his unpublished nonfiction essay I See A New Race. 
Additionally, his Pirate Blood, which is not speculative fiction and remained unpublished after his death, portrayed the characters as victims of their hereditary criminal traits (one a descendant of the corsair Jean Lafitte, another from the Jukes family). These views have been compared with Nazi eugenics (though noting that they were popular and common at the time), with his Lost on Venus being released the same year the Nazis took power (in 1933).", "title": "Reception and criticism" }, { "paragraph_id": 32, "text": "In 2003, Burroughs was inducted into the Science Fiction and Fantasy Hall of Fame.", "title": "Reception and criticism" }, { "paragraph_id": 33, "text": "These three texts have been published by various houses in one or two volumes. Adding to the confusion, some editions have the original (significantly longer) introduction to Part I from the first publication as a magazine serial, and others have the shorter version from the first book publication, which included all three parts under the title The Moon Maid.", "title": "Selected works" } ]
Edgar Rice Burroughs was an American writer noted for his prolific output in the adventure, science fiction, and fantasy genres. Best known for creating the characters Tarzan and John Carter, he also wrote the Pellucidar series, the Amtor series, and the Caspak trilogy. Tarzan was immediately popular, and Burroughs capitalized on it in every possible way, including a syndicated Tarzan comic strip, films, and merchandise. Tarzan remains one of the most successful fictional characters to this day and is a cultural icon. Burroughs's California ranch is now the center of the Tarzana neighborhood in Los Angeles, named after the character.
2001-09-29T17:24:44Z
2023-12-26T20:02:02Z
[ "Template:Dead link", "Template:FadedPage", "Template:Burroughs (books)", "Template:Tarzan", "Template:Webarchive", "Template:Barsoom", "Template:Quote", "Template:Sfhof", "Template:Inkpot Award 1970s", "Template:Authority control", "Template:Sister project links", "Template:Isfdb name", "Template:Sfn", "Template:US$", "Template:Citation", "Template:Cite magazine", "Template:Efn", "Template:Reflist", "Template:StandardEbooks", "Template:Internet Archive author", "Template:Short description", "Template:Use mdy dates", "Template:Infobox writer", "Template:Fact", "Template:Librivox author", "Template:Caspak trilogy", "Template:Cite book", "Template:Cite news", "Template:LCAuth", "Template:Official website", "Template:Portal", "Template:Notelist", "Template:Cite journal", "Template:Gutenberg author", "Template:Stack", "Template:Main", "Template:Cite web", "Template:Harvnb" ]
https://en.wikipedia.org/wiki/Edgar_Rice_Burroughs
9,658
Eugène Viollet-le-Duc
Eugène Emmanuel Viollet-le-Duc (French: [øʒɛn vjɔlɛ lə dyk]; 27 January 1814 – 17 September 1879) was a French architect and author, famous for his restoration of the most prominent medieval landmarks in France. His major restoration projects included Notre-Dame de Paris, the Basilica of Saint Denis, Mont Saint-Michel, Sainte-Chapelle, the medieval walls of the city of Carcassonne, and Roquetaillade castle in the Bordeaux region. His writings on decoration and on the relationship between form and function in architecture had a fundamental influence on a whole new generation of architects, including all the major Art Nouveau artists: Antoni Gaudí, Victor Horta, Hector Guimard, Henry Van de Velde, Henry Sauvage and the École de Nancy, Paul Hankar, Otto Wagner, Eugène Grasset, Émile Gallé and Hendrik Petrus Berlage. He also influenced the first modern architects, Frank Lloyd Wright, Mies van der Rohe, Auguste Perret, Louis Sullivan and Le Corbusier, who considered Viollet-le-Duc the father of modern architecture: "The roots of modern architecture are to be found in Viollet-le-Duc". His writings also influenced John Ruskin, William Morris and the Arts and Crafts movement. At the 1862 International Exhibition in London, the aesthetic works of Burne-Jones, Rossetti, Philip Webb, William Morris, Simeon Solomon and Edward Poynter were directly influenced by drawings in Viollet-le-Duc's Dictionary. The English architect William Burges admitted late in his life, "We all cribbed on Viollet-le-Duc even though no one could read French." Viollet-le-Duc was born in Paris in 1814. His grandfather was an architect, and his father was a high-ranking civil servant, who in 1816 became the overseer of the royal residences of Louis XVIII. His uncle Étienne-Jean Delécluze was a painter, a former student of Jacques-Louis David, an art critic, and the host of a literary salon, which was attended by Stendhal and Sainte-Beuve. His mother hosted her own salon, which women could attend as well as men. There, in 1822 or 1823, Eugène met Prosper Mérimée, a writer who would play a decisive role in his career. In 1825 he began his education at the Pension Moran, in Fontenay-aux-Roses. He returned to Paris in 1829 as a student at the Collège de Bourbon (now the Lycée Condorcet). He passed his baccalaureate examination in 1830. His uncle urged him to enter the École des Beaux-Arts, which had been created in 1806, but the École had an extremely rigid system, based entirely on copying classical models, and Eugène was not interested. Instead he decided to get practical experience in the architectural offices of Jacques-Marie Huvé and Achille Leclère, while devoting much of his time to drawing medieval churches and monuments around Paris. At sixteen he participated in the July 1830 revolution which overthrew Charles X, helping to build a barricade. Following the revolution, which brought Louis Philippe to power, his father became chief of the bureau of royal residences. The new government created, for the first time, the position of Inspector General of Historic Monuments. Eugène's uncle Delécluze agreed to take Eugène on a long tour of France to see monuments. They travelled from July to October 1831 throughout the south of France, and he returned with a large collection of detailed paintings and watercolours of churches and monuments. On his return to Paris, he moved with his family into the Tuileries Palace, where his father was now governor of royal residences.
His family again urged him to attend the École des Beaux-Arts, but he still refused. He wrote in his journal in December 1831, "the École is just a mould for architects. they all come out practically identical." He was a talented and meticulous artist; he travelled around France to visit monuments, cathedrals, and other medieval architecture, made detailed drawings and watercolours. In 1834, at the age of twenty, he married Élisabeth Templier, and in the same year he was named an associate professor of ornamental decoration at the Royal School of Decorative Arts, which gave him a more regular income. His first pupils there included Léon Gaucherel. With the money from the sale of his drawings and paintings, Viollet-le-Duc set off on a long tour of the monuments of Italy, visiting Rome, Venice, Florence and other sites, drawing and painting. In 1838, he presented several of his drawings at the Paris Salon, and began making a travel book, Picturesque and romantic images of the old France, for which, between 1838 and 1844, he made nearly three hundred engravings. In October 1838, with the recommendation of Achille Leclère, the architect with whom he had trained, he was named deputy inspector of the enlargement of the Hôtel Soubise, the new home of the French National Archives. His uncle, Delécluze, then recommended him to the new Commission of Historic Monuments of France, led by Prosper Mérimée, who had just published a book on medieval French monuments. Though he was just twenty-four years old and had no degree in architecture, he was asked to go to Narbonne to propose a plan for the completion of the cathedral there. The project was rejected by the local authorities as too ambitious and too expensive. His first real project was a restoration of the Vézelay Abbey, which many considered as impossible. The church had been sacked by the Huguenots in 1569, and during the French Revolution, the facade and statuary on the facade were destroyed. The vaults of the roof were weakened, and many of the stones had been carried off for other projects. When Mérimée visited to inspect the structure he heard stones falling around him. In February 1840 he gave Viollet-le-Duc the mission of restoring and reconstructing the church so it would not collapse, while "respecting exactly in his project of restoration all the ancient dispositions of the church". The task was all the more difficult because up until that time no scientific studies had been made of medieval building techniques, and there were no schools of restoration. He had no plans for the original building to work from. Viollet-le-Duc had to discover the flaws of construction that had caused the building to start to collapse in the first place and to construct a more solid and stable structure. He lightened the roof and built new arches to stabilize the structure, and slightly changed the shape of the vaults and arches. He was criticized for these modifications in the 1960s, though, as his defenders pointed out, without them the roof would have collapsed under its own weight. Mérimée's deputy, Lenormant, inspected the construction and reported to Mérimée: "The young Leduc seems entirely worthy of your confidence. He needed a magnificent audacity to take charge of such a desperate enterprise; it's certain that he arrived just in time, and if we had waited only ten years the church would have been a pile of stones." This restoration work lasted 19 years. Viollet-le-Duc's success at Vezelay led to a large series of projects. 
In 1840, in collaboration with his friend the architect Jean-Baptiste Lassus, he began the restoration of Sainte-Chapelle in Paris, which had been turned into a storage depot after the Revolution. In February 1843, King Louis Philippe sent him to the Château of Amboise, to restore the stained glass windows in the chapel holding the tomb of Leonardo da Vinci. The windows were unfortunately destroyed in 1940 during World War II. In 1843, Mérimée took Viollet-le-Duc with him to Burgundy and the south of France, on one of his long inspection tours of monuments. Viollet-le-Duc made drawings of the buildings and wrote detailed accounts of each site, illustrated with his drawings, which were published in architectural journals. Through this experience he became the most prominent academic scholar of French medieval architecture, and his medieval dictionary, with over 4,000 drawings, contains the largest iconography on the subject to this day. In 1844, with the backing of Mérimée, Viollet-le-Duc, just thirty years old, and Lassus, then thirty-seven, won a competition for the restoration of Notre-Dame Cathedral, a project which lasted twenty-five years. Their project involved primarily the facade, where many of the statues over the portals had been beheaded or smashed during the Revolution. They proposed two major changes to the interior: rebuilding two of the bays to their original medieval height of four storeys, and removing the marble neoclassical structures and decoration which had been added to the choir during the reign of Louis XIV. Mérimée warned them to be careful: "In such a project, one cannot act with too much prudence or discretion... A restoration may be more disastrous for a monument than the ravages of centuries." The Commission on Historical Monuments approved most of Viollet-le-Duc's plans, but rejected his proposal to remove the choir built under Louis XIV. Viollet-le-Duc himself turned down a proposal to add two new spires atop the towers, arguing that such a monument "would be remarkable but would not be Notre-Dame de Paris". Instead, he proposed to rebuild the original medieval spire and bell tower over the transept, which had been removed in 1786 because it was unstable in the wind. Once the project was approved, Viollet-le-Duc made drawings and photographs of the existing decorative elements; then they were removed and a stream of sculptors began making new statues of saints, gargoyles, chimeras and other architectural elements in a workshop he established, working from his drawings and photographs of similar works in other cathedrals of the same period. He also designed a new treasury in the Gothic style to serve as the museum of the cathedral, replacing the residence of the Archbishop, which had been destroyed in a riot in 1831. The bells in the two towers had been taken out in 1791 and melted down to make cannons. Viollet-le-Duc had new bells cast for the north tower and a new structure built inside to support them. Viollet-le-Duc and Lassus also rebuilt the sacristy, on the south side of the church, which had been built in 1756, but had been burned by rioters during the July Revolution of 1830. The new spire was completed, taller and more strongly built to withstand the weather; it was decorated with statues of the apostles, and the face of Saint Thomas, patron saint of architects, bore a noticeable resemblance to Viollet-le-Duc. The spire was destroyed on 15 April 2019, as a result of the Notre-Dame de Paris fire.
When not engaged in Paris, Viollet-le-Duc continued his long tours into the French provinces, inspecting and checking the progress of more than twenty different restoration projects that were under his control, including seven in Burgundy alone. New projects included the Basilica of Saint-Sernin, Toulouse, and the Basilica of Saint-Denis just outside Paris. Saint-Denis had undergone a restoration by a different architect, François Debret, who had rebuilt one of the two towers. However, in 1846, the new tower, overloaded with masonry, began to crack, and Viollet-le-Duc was called in. He found no way the tower could be saved and had to oversee its demolition, saving the stones. He concentrated on restoring the interior of the church, and was able to restore the original burial chamber of the kings of France. In May 1849, he was named the architect for the restoration of Amiens Cathedral, one of the largest in France, which had been built over many centuries in a variety of different styles. He wrote, "his goal should be to save in each part of the monument its own character, and yet to make it so that the united parts don't conflict with each other; and that can be maintained in a state that is durable and simple." The French coup d'état of 1851 brought Napoleon III to power and transformed France from a republic to an empire. The coup accelerated some of Viollet-le-Duc's projects, as his patron Prosper Mérimée had introduced him to the new Emperor. He moved forward with the slow work of restoration of the Cathedral of Reims and the Cathedral of Amiens. In Amiens, he cleared the interior of the French classical decoration added under Louis XIV, and proposed to make it resolutely Gothic.
Viollet-le-Duc was also to replace the great bestiary of mythical beasts and animals which had decorated the cathedral in the 18th century. In 1856, using examples from other medieval churches and debris from Notre-Dame as his model, his workshop produced dragons, chimeras, grotesques, and gargoyles, as well as an assortment of picturesque pinnacles and fleurons. He engaged in a new project for restoration of the Cathedral of Clermont-Ferrand, a project which continued for ten years. He also undertook an unusual project for Napoleon III: the design and construction of six railway coaches with neo-Gothic interior décor for the Emperor and his entourage. Two of the cars still exist: the salon of honour car, with a fresco on the ceiling, is at the Château de Compiègne, and the dining car, with a massive golden eagle as the centrepiece of the décor, is at the Railroad Museum of Mulhouse. Napoleon III asked Viollet-le-Duc if he could restore a medieval chateau for the Emperor's own use near Compiègne, where the Emperor traditionally spent September and October. Viollet-le-Duc first studied a restoration of the Château de Coucy, which had the highest medieval tower in France. When this proved too complicated, he settled upon Château de Pierrefonds, a castle begun by Louis of Orleans in 1396, then dismantled in 1617 after several sieges by Louis XIII of France. Napoleon bought the ruin for 5000 francs in 1812, and Mérimée declared it an historic monument in 1848. In 1857 Viollet-le-Duc began designing an entirely new chateau on the ruins. The structure was not designed to recreate exactly anything that had existed, but rather to be a castle which recaptured the spirit of the Gothic, with lavish neo-Gothic decoration and 19th-century comforts. Pierrefonds and its interior decoration would influence not only William Burges's Cardiff Castle and Castell Coch, but also the castles of Ludwig II of Bavaria (Neuschwanstein Castle) and the Haut-Kœnigsbourg of Emperor Wilhelm II. While most of his attention was devoted to restorations, Viollet-le-Duc designed and built a number of private residences and new buildings in Paris. He also participated in the most important competition of the period, for the new Paris Opera. There were one hundred seventy-one projects proposed in the original competition, presented at the 1855 Paris Universal Exposition. A jury of noted architects narrowed it down to five, including projects from Viollet-le-Duc and Charles Garnier, age thirty-five. Viollet-le-Duc was ultimately eliminated, which put an end to his hopes of constructing major public buildings. Napoleon III also called upon Viollet-le-Duc for a wide variety of archeological and architectural tasks. When he wished to put up a monument to mark the Battle of Alesia, where Julius Caesar defeated the Gauls, a siege whose actual site was disputed by historians, he asked Viollet-le-Duc to locate the exact battlefield. Viollet-le-Duc conducted excavations at various purported sites, and finally found vestiges of the walls built at the time. He also designed the metal frame for the six-metre-high statue of the Gallic chief Vercingétorix that would be placed on the site. He later designed a similar frame for a much larger statue, the Statue of Liberty, but died before that statue was finished. In 1863, Viollet-le-Duc was named a professor at the École des Beaux-Arts, the school where he had refused to become a student.
In the fortress of neoclassical Beaux-Arts architecture there was much resistance against him, but he attracted two hundred students to his course, who applauded his lecture at the end. But while he had many supporters, the faculty professors and certain students campaigned against him. His critics complained that, aside from having little formal architectural training himself, he had only built a handful of new buildings. He tired of the confrontations and resigned on 16 May 1863, and continued his writing and teaching outside the Beaux-Arts. In response to the Beaux-Arts he initiated the creation of the École Spéciale d'Architecture in Paris in 1865. In the beginning of 1864, he celebrated the conclusion of his most important project, the restoration of Notre-Dame. In January of the same year he completed the first phase of the restoration of the Cathedral of Saint Sernin in Toulouse, one of the landmarks of French Romanesque architecture. Napoleon III invited Viollet-le-Duc to study possible restorations overseas, including in Algeria, Corsica, and in Mexico, where Napoleon had installed a new Emperor, Maximilian, under French sponsorship. He also saw the consecration of the third church that he had designed, the neo-Gothic church of Saint-Denis de l'Estree, in the Paris suburb of Saint-Denis. Between 1866 and 1870, his major project was the ongoing transformation of Pierrefonds from a ruin into a royal residence. His plans for the metal framework he had designed for Pierrefonds were displayed at the Paris Universal Exposition of 1867. He also began a new area of study, researching the geology and geography of the region around Mont Blanc in the Alps. While on his mapping excursion in the Alps in July 1870, he learned that war had been declared between Prussia and France. As the Franco-Prussian War commenced, Viollet-le-Duc hurried back to Paris, and offered his services as a military engineer; he was put into service as a colonel of engineers, preparing the defenses of Paris. In September, the Emperor was captured at the Battle of Sedan, a new Republican government took power, and the Empress Eugénie fled into exile, as Germans marched as far as Paris and put it under siege. At the same time, on September 23, Viollet-le-Duc's primary patron and supporter, Prosper Mérimée, died peacefully in the south of France. Viollet-le-Duc supervised the construction of new defensive works outside Paris. The war was a disaster as he wrote in his journal on the 14th December 1870: "Disorganization is everywhere. The officers have no confidence in the troops, and the troops have no confidence in the officers. Each day, new orders and new projects which contravene those of the day before." He fought with the French army against the Germans at Buzenval on 24 January 1871. The battle was lost, and the French capitulated on 28 January. Viollet-le-Duc wrote to his wife on February 28, "I don't know what will become of me, but I do not want to return any more to administration. I am disgusted by it forever, and want nothing more than to pass the years that remain to me in study and in the most modest possible life." Always the scholar, he wrote a detailed study of the effectiveness and deficiencies of the fortifications of Paris during the siege, which was to be used for the 1917 defense of Verdun and the construction of the Maginot line in 1938. 
In May 1871 he left his home in Paris just before national guardsmen arrived to draft him into the armed forces of the Paris Commune, which subsequently condemned him to death. He escaped to Pierrefonds, where he had a small apartment, before going into exile in Lausanne, where he indulged his passion for the mountains, making detailed maps and a series of thirty-two drawings of the alpine scenery. While in Lausanne he was also asked to undertake the restoration of the cathedral. He later returned to Paris after the Commune had been suppressed and saw the ruins of most of the public buildings of the city, burned by the Commune in its last days. He received his only commission from the new government of the French Third Republic: Jules Simon, the new Minister of Culture and Public Instruction, asked him to design a plaque to be placed before Notre-Dame to honor the hostages killed by the Paris Commune in its final days. The new government of the French Third Republic made little use of his expertise in the restoration of the major government buildings which had been burned by the Paris Commune, including the Tuileries Palace, the Palace of the Legion of Honor, the Palais-Royal, the library of the Louvre, the Ministry of Justice and the Ministry of Finance. The only reconstruction on which he was consulted was that of the Hôtel de Ville. The writer Edmond de Goncourt called for leaving the ruin of the Hôtel de Ville exactly as it was, "a ruin of a magical palace, a marvel of the picturesque. The country should not condemn it without appeal to restoration by Viollet-le-Duc." The government asked Viollet-le-Duc to organize a competition. He presented two options: either to restore the building to its original state, with its historic interior, or to demolish it and build a new city hall. In July 1872 the government decided to preserve the Renaissance facade, but otherwise to completely demolish and rebuild the building. Throughout his life Viollet-le-Duc wrote over 100 publications on architecture, decoration, history, archeology and other subjects, some of which would become international best-sellers: Dictionary of French Architecture from 11th to 16th Century (1854–1868), Entretiens sur l'architecture (1863–1872), L'histoire d'une Maison (1873) and Histoire d'un Dessinateur: Comment on Apprend à Dessiner (1879). In his Entretiens sur l'architecture he concentrated in particular on the use of iron and other new materials, and the importance of designing buildings whose architecture was adapted to their function, rather than to a particular style. The book was translated into English in 1881 and won a large following in the United States. The Chicago architect Louis Sullivan, one of the inventors of the skyscraper, often invoked the phrase, "Form follows function." Lausanne Cathedral was his final major restoration project; it was rebuilt following his plans between 1873 and 1876. Work continued after his death. His reconstruction of the bell tower was later criticized; he eliminated the original octagonal base and added a new spire, which rested on the walls, and not on the vaulting, like the original spire. He also added new decoration, crowning the spire at mid-height with gables, another original element, and removing the original tiles. He was also criticized for the materials and ornaments he added to the towers, including gargoyles. His structural design was preserved, but in 1925 his gargoyles and original ornamentation were removed, and the spire was re-covered with tiles.
His reputation had reached outside of France. The spire and roof of Strasbourg Cathedral had been damaged by German artillery during the Franco-Prussian War, and the city was now part of Germany. The German government invited Viollet-le-Duc to comment on their plans for the restoration, which involved a more grandiose Romanesque tower. Viollet-le-Duc informed the German architect that the planned new tower was completely out of character with the original facade and style of the cathedral. His advice was accepted, and the church was restored to its original form. In 1872 Viollet-le-Duc was engaged in the reconstruction of the Château d'Amboise, owned by the descendants of the former King, Louis-Philippe. The chateau had been confiscated by Napoleon III in 1848 but was returned to the family in 1872. It was a massive project to turn it into a residence, involving at times three hundred workers. Viollet-le-Duc designed all the work to the finest details, including the floor tiles, the gas lights in the salons, the ovens in the kitchen, and the electric bells for summoning servants. In 1874 Viollet-le-Duc resigned as diocesan architect of Paris and was succeeded by his contemporary, Paul Abadie. In his final years, he continued to supervise the restoration projects that were underway for the Commission of Historical Monuments. He engaged in polemics about architecture in the press, and was elected to the Paris municipal council. While planning the design and construction of the Statue of Liberty (Liberty Enlightening the World) sculptor Frédéric Auguste Bartholdi interested Viollet-le-Duc, his friend and mentor, in the project. As chief engineer, Viollet-le-Duc designed a brick pier within the statue, to which the skin would be anchored. After consultations with the metalwork foundry Gaget, Gauthier & Co., Viollet-le-Duc chose the metal which would be used for the skin, copper sheets, and the method used to shape it, repoussé, in which the sheets were heated and then struck with wooden hammers. An advantage of this choice was that the entire statue would be light for its volume, as the copper need be only 0.094 inches (2.4 mm) thick. He became engaged in the planning and construction of the Paris Universal Exposition of 1878. He proposed to the Minister of Education, Jules Ferry, that the Trocadéro Palace, the main building of the Exposition on the hilltop of Chaillot, be transformed after the Exposition into a museum of French monuments, displaying models of architecture and sculpture from landmarks around France. This idea was accepted. The National Museum of French Monuments opened in 1882, after his death. The Palais was reconstructed into the Palais de Chaillot in 1937, but the Museum of French Monuments was preserved and can be seen there today. In his final years his son Eugène-Louis became the head of the Commission of Historic Monuments. He took on just one new project, the restoration of the cloister of the Augustines at Toulouse. He completed his series of dictionaries of architectural periods, designed for a general audience. He also devoted more time to studying the geography of the Alps around Mont-Blanc. He spent his summers hiking in the mountains and writing articles about his travels. He launched a public campaign for the re-forestation of the Alps, and published a detailed map of the area in 1876. 
He spent more and more time at La Vedette, the villa he constructed in Lausanne, a house on the model of a Savoyard chalet, but with a minimum of decoration, illustrating his new doctrine of form following function. He made one last visit to inspect Carcassonne, whose work was now under his son's direction. After an exhausting summer of hiking in the Alps in 1879, he became ill and died in Lausanne on 17 September 1879. He was buried in the cemetery of La Sallaz in Lausanne. In 1946 his grave and monument were transferred to the Cemetery of Bois-le-Vaux (Section XVIII) in Lausanne. Viollet-le-Duc married Elisabeth Tempier in Paris on 3 May 1834. The couple had two children, but separated a few years after marriage, and spent little time together; he was continually on the road. The writer Geneviève Viollet-le-Duc (winner of the prix Broquette-Gonin in 1978) was his great-granddaughter. Viollet-le-Duc famously defined restoration in volume eight of his Dictionnaire raisonné de l'architecture française du XI au XVI siecle of 1858: "To restore a building is not to maintain it, repair it or remake it: it is to re-establish it in a complete state which may never have existed at any given moment." He then explained that it had to meet four conditions: (1) The "re-establishment" had to be scientifically documented with plans and photographs and archeological records, which would guarantee exactness. (2) The restoration had to involve not just the appearance of the monument, or the effect that it produced, but also its structure; it had to use the most efficient means to assure the long life of the building, including using more solid materials, used more wisely. (3) the restoration had to exclude any modification contrary to obvious evidence; but the structure could be adapted to conform to more modern or rational uses and practices, which meant alterations to the original plan; and (4) The restoration should preserve older modifications made to the building, with the exception of those which compromised its stability or its conservation, or those which gravely violated the value of its historical presence. He drew conclusions from medieval architecture that he applied to modern architecture. He noted that it was sometimes necessary to employ an iron frame in restoration to avoid the danger of fires, as long as the new structure was not heavier than the original, and kept the original balance of forces found in medieval structures. "The monuments of the Middle Ages were carefully calculated, and their organism is delicate. There is nothing in excess in their works, nothing useless. If you change one of the conditions of these organisms, you change all the others. Many people consider this a fault; for us, this is a quality which we too often neglect in our modern construction....Why should we build expensive walls two meters thick, if walls fifty centimeters thick [with reinforced supports], offer sufficient stability? In the structure of the Middle Ages, every portion of a work fulfilled a function and possessed an action." During the entire career of Viollet-le-Duc, he was engaged in a dispute with the doctrines of the École des Beaux-Arts, the leading architectural school of France, which he refused to attend as a student, and where he taught briefly as a professor, before being pressured to depart. 
In 1846 he engaged in a fervent exchange in print with Quatremère de Quincy, the Perpetual Secretary of the French Academy, on the question, "Is it suitable, in the 19th century, to build churches in the Gothic style?" De Quincy and his followers denounced the Gothic style as incoherent, disorderly, unintelligent, decadent and without taste. Viollet-le-Duc responded, "What we want, messieurs, is the return of an art which was born in our country.... Leave to Rome what belongs to Rome, and to Athens what belongs to Athens. Rome didn't want our Gothic (and was perhaps the only one in Europe to reject it) and they were right, because when one has the good fortune to possess a national architecture, the best thing is to keep it." "If you study for a moment a church of the 13th century", he wrote, "you see that all of the construction is carried out according to an invariable system. All the forces and the weights are thrust out to the exterior, a disposition which gives the interior the greatest open space possible. The flying buttresses and contreforts alone support the entire structure, and always have an aspect of resistance, of force and stability which reassures the eye and the spirit; the vaults, built with materials that are easy to mount and to place at a great height, are combined in a way that places the totality of their weight on the piles; that the most simple means are always employed...and that all the parts of these constructions, independent of each other, even as they rely on each other, present an elasticity and a lightness needed in a building of such great dimensions. We can still see (and this is only found in Gothic architecture) that human proportions are the one fixed rule." Viollet-le-Duc was often accused by certain critics, in his own time and later, of pursuing the spirit of the Gothic style in some of his restorations instead of strict historical accuracy. Many art historians also consider that the British architectural writer John Ruskin and William Morris were ferocious opponents of Viollet-le-Duc's restorations. Ruskin, however, never criticised Viollet-le-Duc's restoration work in itself; rather, he criticised the principle of restoration itself. Indeed, at the beginning of his career Ruskin held a very radical opinion on restoration: "a building should be looked after and if not it should be left to die". Viollet-le-Duc's position on the subject was more nuanced: "if a building has not been kept up, it should be restored". The existence of an opposition between Ruskin and Viollet-le-Duc on restoration is today questioned by new research based on Ruskin's own writings: "there is no book on architecture which has everything correct apart from Viollet-le-Duc's Dictionary". At the end of his life Ruskin expressed regret that "no one in England had done the work that Viollet-le-Duc had done in France". Viollet-le-Duc's restorations sometimes involved non-historical additions, either to ensure the stability of the building, or sometimes simply to maintain the harmony of the design. The flèche or spire of Notre-Dame de Paris, which had been constructed in about 1250, was removed in 1786 after it was damaged by the wind. Viollet-le-Duc designed and constructed a new spire, ornamented with statuary, which was taller than the original and modified to resist the weather, but in harmony with the rest of the design. In the 19th and 20th centuries, his flèche was a target for critics.
He was also criticized later for his modifications of the choir of Notre-Dame, which had been rebuilt in the Louis XIV style during the reign of that king. Viollet-le-Duc took out the old choir, including the altar where Napoleon Bonaparte had been crowned Emperor, and replaced them with a Gothic altar and decoration of his own design. When he modified the choir, he also constructed new bays with small Gothic rose windows modelled on those in the church of Chars, in the Oise Valley. Some historians condemned these restorations as non-historical invention. His defenders pointed out that Viollet-le-Duc did not make any decisions on the restoration of Notre-Dame by himself; all of his plans were approved by Prosper Mérimée, the Inspector of Historical Monuments, and by the Commission of Historic Monuments. He was criticized for the abundance of Gothic gargoyles, chimeras, fleurons, and pinnacles which he added to Notre-Dame Cathedral. These decorations had existed in the Middle Ages but had largely been removed during the reign of Louis XIV. The last original gargoyles had been taken down in 1813. He modelled the new gargoyles and monsters on examples from other cathedrals of the period. He was later criticized also for the stained glass windows he designed and had made for the chapels around the ground level of the cathedral, which feature intricate Gothic designs in grisaille that allow more light into the church. The contemporary view of the controversy over his restoration is summarized on a descriptive panel near the altar of the cathedral: "The great restoration, carried to fruition by Viollet-le-Duc following the death of Lassus, supplied new radiance to the cathedral – whatever reservations one might have about the choices that were made. The work of the nineteenth century is now as much a part of the architectural history of Notre-Dame as that undertaken in previous centuries." The restoration of the ramparts of Carcassonne was also criticized in the 20th century. His critics pointed out that the pointed caps of the towers he constructed were more typical of northern France, not the region where Carcassonne was located, near the Spanish border. Similarly, he added roofs of northern slate tiles rather than southern clay tiles, a choice that has been reversed in more recent restorations. His critics also claimed that Viollet-le-Duc sought a "condition of completeness" which never actually existed at any given time. The principal counter-argument made by Viollet-le-Duc's defenders was that, without his prompt restorations, many of the buildings that he restored would have been lost, and that he did the best that he could with the knowledge that was then available. Mortimer Wheeler's entry on English archaeologist Charles R. Peers for the Dictionary of National Biography (1971) is worth quoting for its critique of Viollet-le-Duc: "he [Peers] laid down the principles which have governed architectural conservation in the United Kingdom and have served as a model in other parts of the world. His cardinal principle was to retain but not to restore the surviving remains of an ancient structure; and in this respect he departed emphatically from the tradition of Viollet-le-Duc and his successors in France and Italy, where exuberant restoration frequently obscured the evidence upon which it was based ..." Throughout his career Viollet-le-Duc made notes and drawings, not only of the buildings he was working on but also of Romanesque, Gothic and Renaissance buildings that were soon to be demolished.
His notes were useful when preparing his published works. His study of the medieval and Renaissance periods was not limited to architecture but extended also to such areas as furniture, clothing, musical instruments, armament, and geology. His work was published first in serial form and then as full-scale books. Viollet-le-Duc is considered by many to be the first theorist of modern architecture. Sir John Summerson wrote that "there have been two supremely eminent theorists in the history of European architecture – Leon Battista Alberti and Eugène Viollet-le-Duc." His architectural theory was largely based on finding the ideal forms for specific materials and using these forms to create buildings. His writings centered on the idea that materials should be used "honestly". He believed that the outward appearance of a building should reflect the rational construction of the building. In Entretiens sur l'architecture, Viollet-le-Duc praised the Greek temple for its rational representation of its construction. For him, "Greek architecture served as a model for the correspondence of structure and appearance." Another component of Viollet-le-Duc's theory was that the design of a building should start from its program and its plan, and end with its decorations. If this resulted in an asymmetrical exterior, so be it. He dismissed the symmetry of classicist buildings as vain, since it cared too much about appearances at the expense of practicality and convenience for the inhabitants of the house. In several unbuilt projects for new buildings, Viollet-le-Duc applied the lessons he had derived from Gothic architecture, adapting its rational structural systems to modern building materials such as cast iron. For inspiration, he also examined organic structures, such as leaves and animal skeletons. He was especially interested in the wings of bats, an influence represented by his Assembly Hall project. Viollet-le-Duc's drawings of iron trusswork were innovative for the time. Many of his designs emphasizing iron would later influence the Art Nouveau movement, most noticeably in the work of Hector Guimard, Victor Horta, Antoni Gaudí and Hendrik Petrus Berlage. His writings inspired several American architects, including Frank Furness, John Wellborn Root, Louis Sullivan, and Frank Lloyd Wright. Viollet-le-Duc had a second career in the military, primarily in the defense of Paris during the Franco-Prussian War (1870–71). He was so influenced by the conflict that during his later years he described the idealized defense of France through the analogy of the military history of La Roche-Pont, an imaginary castle, in his work Histoire d'une Forteresse (Annals of a Fortress, twice translated into English). Accessible and well researched, it is partly fictional. Annals of a Fortress strongly influenced French military defensive thinking. Viollet-le-Duc's critique of the effect of artillery (applying his practical knowledge from the 1870–1871 war) is so complete that it accurately describes the principles applied to the defense of France until World War II. The physical results of his theories are present in the fortification of Verdun prior to World War I and the Maginot Line prior to World War II. His theories are also represented by the French military theory of "Deliberate Advance", which stresses that artillery and a strong system of fortresses in the rear of an army are essential.
The English architect Benjamin Bucknall (1833–1895) was a devotee of Viollet-le-Duc and between 1874 and 1881 translated several of his publications into English to popularise his principles in Great Britain. The later works of the English designer and architect William Burges were greatly influenced by Viollet-le-Duc, most strongly in Burges's designs for his own home, The Tower House in London's Holland Park district, and for Castell Coch near Cardiff, Wales. An exhibition, Eugène Viollet-le-Duc 1814–1879, was presented in Paris in 1965, and there was a larger, centennial exhibition in 1980. Viollet-le-Duc was the subject of a Google Doodle on January 27, 2014.
[ { "paragraph_id": 0, "text": "Eugène Emmanuel Viollet-le-Duc (French: [øʒɛn vjɔlɛ lə dyk]; 27 January 1814 – 17 September 1879) was a French architect and author, famous for his restoration of the most prominent medieval landmarks in France. His major restoration projects included Notre-Dame de Paris, the Basilica of Saint Denis, Mont Saint-Michel, Sainte-Chapelle, the medieval walls of the city of Carcassonne, and Roquetaillade castle in the Bordeaux region.", "title": "" }, { "paragraph_id": 1, "text": "His writings on decoration and on the relationship between form and function in architecture had a fundamental influence on a whole new generation of architects, including all the major Art Nouveau artists: Antoni Gaudí, Victor Horta, Hector Guimard, Henry Van de Velde, Henry Sauvage and the Ecole de Nancy, Paul Hankar, Otto Wagner, Eugene Grasset, Emile Gallé and Hendrik Petrus Berlage. He also influences the first modern architects, Frank Lloyd Wright, Mies van der Rohe, Auguste Perret, Louis Sullivan and Le Corbusier, who considered Viollet-le-Duc as the father of modern architecture: \"The roots of modern architecture are to be found in Viollet-le-Duc\". His writings also influenced John Ruskin, William Morris and the Arts and Crafts movement. And at the 1862 international exhibition in London the esthetic works of Burne-Jones, Rossetti, Philip Webb, William Morris, Simeon Solomon et Edward Poynter are directly influenced from drawings in Viollet-le-Duc's Dictionnary. The English architect William Burges admitted in his late life \"We all cribbed on Viollet-le-Duc even though no one could read French.\"", "title": "" }, { "paragraph_id": 2, "text": "Viollet-le-Duc was born in Paris in 1814. His grandfather was an architect, and his father was a high-ranking civil servant, who in 1816 became the overseer of the royal residences of Louis XVIII. His uncle Étienne-Jean Delécluze was a painter, a former student of Jacques-Louis David, an art critic and hosted a literary salon, which was attended by Stendhal and Sainte-Beuve. His mother hosted her own salon, which women could attend as well as men. There, in 1822 or 1823, Eugène met Prosper Mérimée, a writer who would play a decisive role in his career.", "title": "Youth and education" }, { "paragraph_id": 3, "text": "In 1825 he began his education at the Pension Moran, in Fontenay-aux-Roses. He returned to Paris in 1829 as a student at the college de Bourbon (now the Lycée Condorcet). He passed his baccalaureate examination in 1830. His uncle urged him to enter the École des Beaux-Arts, which had been created in 1806, but the École had an extremely rigid system, based entirely on copying classical models, and Eugène was not interested. Instead he decided to get practical experience in the architectural offices of Jacques-Marie Huvé and Achille Leclère, while devoting much of his time to drawing medieval churches and monuments around Paris.", "title": "Youth and education" }, { "paragraph_id": 4, "text": "At sixteen he participated in the July 1830 revolution which overthrew Charles X, building a barricade. Following the revolution, which brought Louis Philippe to power, his father became chief of the bureau of royal residences. The new government created, for the first time, the position of Inspector General of Historic Monuments. Eugène's uncle Delécluze agreed to take Eugène on a long tour of France to see monuments. 
They travelled from July to October 1831 throughout the south of France, and he returned with a large collection of detailed paintings and watercolours of churches and monuments.", "title": "Youth and education" }, { "paragraph_id": 5, "text": "On his return to Paris, he moved with his family into the Tuileries Palace, where his father was now governor of royal residences. His family again urged him to attend the École des Beaux-Arts, but he still refused. He wrote in his journal in December 1831, \"the École is just a mould for architects. they all come out practically identical.\" He was a talented and meticulous artist; he travelled around France to visit monuments, cathedrals, and other medieval architecture, made detailed drawings and watercolours. In 1834, at the age of twenty, he married Élisabeth Templier, and in the same year he was named an associate professor of ornamental decoration at the Royal School of Decorative Arts, which gave him a more regular income. His first pupils there included Léon Gaucherel.", "title": "Youth and education" }, { "paragraph_id": 6, "text": "With the money from the sale of his drawings and paintings, Viollet-le-Duc set off on a long tour of the monuments of Italy, visiting Rome, Venice, Florence and other sites, drawing and painting. In 1838, he presented several of his drawings at the Paris Salon, and began making a travel book, Picturesque and romantic images of the old France, for which, between 1838 and 1844, he made nearly three hundred engravings.", "title": "Youth and education" }, { "paragraph_id": 7, "text": "In October 1838, with the recommendation of Achille Leclère, the architect with whom he had trained, he was named deputy inspector of the enlargement of the Hôtel Soubise, the new home of the French National Archives. His uncle, Delécluze, then recommended him to the new Commission of Historic Monuments of France, led by Prosper Mérimée, who had just published a book on medieval French monuments. Though he was just twenty-four years old and had no degree in architecture, he was asked to go to Narbonne to propose a plan for the completion of the cathedral there. The project was rejected by the local authorities as too ambitious and too expensive.", "title": "First architectural restorations" }, { "paragraph_id": 8, "text": "His first real project was a restoration of the Vézelay Abbey, which many considered as impossible. The church had been sacked by the Huguenots in 1569, and during the French Revolution, the facade and statuary on the facade were destroyed. The vaults of the roof were weakened, and many of the stones had been carried off for other projects. When Mérimée visited to inspect the structure he heard stones falling around him. In February 1840 he gave Viollet-le-Duc the mission of restoring and reconstructing the church so it would not collapse, while \"respecting exactly in his project of restoration all the ancient dispositions of the church\".", "title": "First architectural restorations" }, { "paragraph_id": 9, "text": "The task was all the more difficult because up until that time no scientific studies had been made of medieval building techniques, and there were no schools of restoration. He had no plans for the original building to work from. Viollet-le-Duc had to discover the flaws of construction that had caused the building to start to collapse in the first place and to construct a more solid and stable structure. 
He lightened the roof and built new arches to stabilize the structure, and slightly changed the shape of the vaults and arches. He was criticized for these modifications in the 1960s, though, as his defenders pointed out, without them the roof would have collapsed under its own weight. Mérimée's deputy, Lenormant, inspected the construction and reported to Mérimée: \"The young Leduc seems entirely worthy of your confidence. He needed a magnificent audacity to take charge of such a desperate enterprise; it's certain that he arrived just in time, and if we had waited only ten years the church would have been a pile of stones.\" This restoration work lasted 19 years.", "title": "First architectural restorations" }, { "paragraph_id": 10, "text": "Viollet-le-Duc's success at Vezelay led to a large series of projects. In 1840, in collaboration with his friend the architect Jean-Baptiste Lassus he began the restoration of Sainte-Chapelle in Paris, which had been turned into a storage depot after the Revolution. In February 1843, King Louis Philippe sent him to the Château of Amboise, to restore the stained glass windows in the chapel holding the tomb of Leonardo da Vinci. The windows were unfortunately destroyed in 1940 during World War II.", "title": "Sainte-Chapelle and Amboise" }, { "paragraph_id": 11, "text": "In 1843, Mérimée took Viollet-le-Duc with him to Burgundy and the south of France, on one of his long inspection tours of monuments. Viollet-le-Duc made drawings of the buildings and wrote detailed accounts of each site, illustrated with his drawing, which were published in architectural journals. With his experience he became the most prominent academic scholar on French medieval architecture and his medieval dictionnary, with over 4000 drawings, contains the largest iconography on the subject to this day.", "title": "Sainte-Chapelle and Amboise" }, { "paragraph_id": 12, "text": "In 1844, with the backing of Mérimée, Viollet-le-Duc, just thirty years old, and Lassus, then thirty-seven, won a competition for the restoration of Notre-Dame Cathedral which lasted twenty-five years. Their project involved primarily the facade, where many of the statues over the portals had been beheaded or smashed during the Revolution. They proposed two major changes to the interior: rebuilding two of the bays to their original medieval height of four storeys, and removing the marble neoclassical structures and decoration which had been added to the choir during the reign of Louis XIV. Mérimée warned them to be careful: \"In such a project, one cannot act with too much prudence or discretion...A restoration may be more disastrous for a monument than the ravages of centuries.\" The Commission on Historical Monuments approved most of Viollet-le-Duc's plans, but rejected his proposal to remove the choir built under Louis XIV. Viollet-le-Duc himself turned down a proposal to add two new spires atop the towers, arguing that such a monument \"would be remarkable but would not be Notre-Dame de Paris\". 
Instead, he proposed to rebuild the original medieval spire and bell tower over the transept, which had been removed in 1786 because it was unstable in the wind.", "title": "Notre-Dame de Paris" }, { "paragraph_id": 13, "text": "Once the project was approved, Viollet-le-Duc made drawings and photographs of the existing decorative elements; then they were removed and a stream of sculptors began making new statues of saints, gargoyles, chimeras and other architectural elements in a workshop he established, working from his drawings and photographs of similar works in other cathedrals of the same period. He also designed a new treasury in the Gothic style to serve as the museum of the cathedral, replacing the residence of the Archbishop, which had been destroyed in a riot in 1831.", "title": "Notre-Dame de Paris" }, { "paragraph_id": 14, "text": "The bells in the two towers had been taken out in 1791 and melted down to make cannons. Viollet-le-Duc had new bells cast for the north tower and a new structure built inside to support them. Viollet-le-Duc and Lassus also rebuilt the sacristy, on the south side of the church, which had been built in 1756, but had been burned by rioters during the July Revolution of 1830. The new spire was completed, taller and more strongly built to withstand the weather; it was decorated with statues of the apostles, and the face of Saint Thomas, patron saint of architects, bore a noticeable resemblance to Viollet-le-Duc. The spire was destroyed on 15 April 2019, as a result of the Notre-Dame de Paris fire.", "title": "Notre-Dame de Paris" }, { "paragraph_id": 15, "text": "When not engaged in Paris, Viollet-le-Duc continued his long tours into the French provinces, inspecting and checking the progress of more than twenty different restoration projects that were under his control, including seven in Burgundy alone. New projects included the Basilica of Saint-Sernin, Toulouse, and the Basilica of Saint-Denis just outside Paris. Saint-Denis had undergone a restoration by a different architect, Francois Debret, who had rebuilt one of the two towers. However, in 1846, the new tower, overloaded with masonry, began to crack, and Viollet-le-Duc was called in. He found no way the building could be saved and had to oversee the demolition of the tower, saving the stones. He concentrated on restoring the interior of the church, and was able to restore the original burial chamber of the kings of France.", "title": "Saint-Denis and Amiens" }, { "paragraph_id": 16, "text": "In May 1849, he was named the architect for the restoration of Amiens Cathedral, one of the largest in France, which had been built over many centuries in a variety of different styles. He wrote, \"his goal should be to save in each part of the monument its own character, and yet to make it so that the united parts don't conflict with each other; and that can be maintained in a state that is durable and simple.\"", "title": "Saint-Denis and Amiens" }, { "paragraph_id": 17, "text": "The French coup d'état of 1851 brought Napoleon III to power and transformed France from a republic to an empire. The coup accelerated some of Viollet-le-Duc's projects as his patron Prosper Mérimée had introduced him to the new Emperor. He moved forward with the slow work of restoration of the Cathedral of Reims and Cathedral of Amiens. In Amiens, he cleared the interior of the French classical decoration added under Louis XIV, and proposed to make it resolutely Gothic. 
He gave the Emperor a tour of his project in September 1853; the Empress immediately offered to pay two-thirds of the cost of the restoration. In the same year he undertook the restoration of the Château de Vincennes, long occupied by the military, along with its chapel, similar to Sainte-Chapelle. A devotee of the pure Gothic, he described the chapel as \"one of the finest specimens of Gothic in decline\".", "title": "Imperial projects: Carcassonne, Vincennes and Pierrefonds" }, { "paragraph_id": 18, "text": "In November 1853, he provided the costs and plans for the medieval ramparts of Carcassonne which he had first begun planning in 1849. The first fortifications had been built by the Visigoths; on top of these, in the Middle Ages Louis XI and then Philip the Bold had built a formidable series of towers, galleries, walls, gates and interlocking defences that resisted all sieges until 1355. The fortifications were largely intact, since the surroundings of the city were still a military defensive zone in the 19th century, but the towers were without tops and a large number of structures had been built up against the old walls. Once he obtained funding and made his plans, he began demolishing all structures which had been added to the ramparts over the centuries, and restored the gates, walls and towers to their original form, including the defence platforms, roofs on the towers and shelters for archers that would have been used during a siege. He found many of the original mountings for weapons still in place. To accompany his work, he published a detailed history of the city and its fortifications, with his drawings. Carcassonne became the best example of medieval military architecture in France, and also an important tourist attraction.", "title": "Imperial projects: Carcassonne, Vincennes and Pierrefonds" }, { "paragraph_id": 19, "text": "Napoleon III provided additional funding for the continued restoration of Notre-Dame. Viollet-le-Duc was also to replace the great bestiary of mythical beasts and animals which had decorated the cathedral in the 18th century. In 1856, using examples from other medieval churches and debris from Notre-Dame as his model, his workshop produced dragons, chimeras, grotesques, and gargoyles, as well as an assortment of picturesque pinnacles and fleurons. He engaged in a new project for restoration of the Cathedral of Clermont-Ferrand, a project which continued for ten years. He also undertook an unusual project for Napoleon III; the design and construction of six railway coaches with neo-Gothic interior décor for the Emperor and his entourage. Two of the cars still exist; the salon of honour car, with a fresco on the ceiling, is at the Château de Compiègne, and the dining car, with a massive golden eagle as the centrepiece of the décor, is at the Railroad Museum of Mulhouse.", "title": "Imperial projects: Carcassonne, Vincennes and Pierrefonds" }, { "paragraph_id": 20, "text": "Napoleon III asked Viollet-le-Duc if he could restore a medieval chateau for the Emperor's own use near Compiègne, where the Emperor traditionally passed September and October. Viollet-le-Duc first studied a restoration of the Château de Coucy, which had the highest medieval tower in France. When this proved too complicated, he settled upon Château de Pierrefonds, a castle begun by Louis of Orleans in 1396, then dismantled in 1617 after several sieges by Louis XIII of France. Napoleon bought the ruin for 5000 francs in 1812, and Mérimée declared it an historic monument in 1848. 
In 1857 Viollet-le-Duc began designing an entirely new chateau on the ruins. This structure was not designed to recreate exactly anything that had existed, but to be a castle which recaptured the spirit of the Gothic, with lavish neo-Gothic decoration and 19th-century comforts. Pierrefonds and its interior decoration would influence not only William Burges and his Cardiff Castle and Castell Coch, but also the castles of Ludwig II of Bavaria (Neuschwanstein Castle) and the Haut-Kœnigsbourg of the Emperor Wilhelm II.", "title": "Imperial projects: Carcassonne, Vincennes and Pierrefonds" }, { "paragraph_id": 21, "text": "While most of his attention was devoted to restorations, Viollet-le-Duc designed and built a number of private residences and new buildings in Paris. He also participated in the most important competition of the period, for the new Paris Opera. There were one hundred seventy-one projects proposed in the original competition, presented at the 1855 Paris Universal Exposition. A jury of noted architects narrowed it down to five, including projects from Viollet-le-Duc and Charles Garnier, age thirty-five. Viollet-le-Duc was finally eliminated, and this put an end to his wish to construct public buildings.", "title": "Imperial projects: Carcassonne, Vincennes and Pierrefonds" }, { "paragraph_id": 22, "text": "Napoleon III also called upon Viollet-le-Duc for a wide variety of archeological and architectural tasks. When he wished to put up a monument to mark the Battle of Alesia, where Julius Caesar defeated the Gauls, a siege whose actual site was disputed by historians, he asked Viollet-le-Duc to locate the exact battlefield. Viollet-le-Duc conducted excavations at various purported sites, and finally found vestiges of the walls built at the time. He also designed the metal frame for the six-metre-high statue of the Gallic chief Vercingétorix that would be placed on the site. He later designed a similar frame for a much larger statue, the Statue of Liberty, but died before that statue was finished.", "title": "Imperial projects: Carcassonne, Vincennes and Pierrefonds" }, { "paragraph_id": 23, "text": "In 1863, Viollet-le-Duc was named a professor at the École des Beaux-Arts, the school where he had refused to become a student. In the fortress of neoclassical Beaux-Arts architecture there was much resistance against him, but he attracted two hundred students to his course, who applauded his lecture at the end. But while he had many supporters, the faculty professors and certain students campaigned against him. His critics complained that, aside from having little formal architectural training himself, he had only built a handful of new buildings. He tired of the confrontations and resigned on 16 May 1863, and continued his writing and teaching outside the Beaux-Arts. In response to the Beaux-Arts he initiated the creation of the École Spéciale d'Architecture in Paris in 1865.", "title": "End of the Empire and of Restoration" }, { "paragraph_id": 24, "text": "At the beginning of 1864, he celebrated the conclusion of his most important project, the restoration of Notre-Dame. In January of the same year he completed the first phase of the restoration of the Cathedral of Saint Sernin in Toulouse, one of the landmarks of French Romanesque architecture. Napoleon III invited Viollet-le-Duc to study possible restorations overseas, including in Algeria, Corsica, and in Mexico, where Napoleon had installed a new Emperor, Maximilian, under French sponsorship.
He also saw the consecration of the third church that he had designed, the neo-Gothic church of Saint-Denis de l'Estree, in the Paris suburb of Saint-Denis. Between 1866 and 1870, his major project was the ongoing transformation of Pierrefonds from a ruin into a royal residence. His plans for the metal framework he had designed for Pierrefonds were displayed at the Paris Universal Exposition of 1867. He also began a new area of study, researching the geology and geography of the region around Mont Blanc in the Alps. While on his mapping excursion in the Alps in July 1870, he learned that war had been declared between Prussia and France.", "title": "End of the Empire and of Restoration" }, { "paragraph_id": 25, "text": "As the Franco-Prussian War commenced, Viollet-le-Duc hurried back to Paris, and offered his services as a military engineer; he was put into service as a colonel of engineers, preparing the defenses of Paris. In September, the Emperor was captured at the Battle of Sedan, a new Republican government took power, and the Empress Eugénie fled into exile, as Germans marched as far as Paris and put it under siege. At the same time, on September 23, Viollet-le-Duc's primary patron and supporter, Prosper Mérimée, died peacefully in the south of France. Viollet-le-Duc supervised the construction of new defensive works outside Paris. The war was a disaster as he wrote in his journal on the 14th December 1870: \"Disorganization is everywhere. The officers have no confidence in the troops, and the troops have no confidence in the officers. Each day, new orders and new projects which contravene those of the day before.\" He fought with the French army against the Germans at Buzenval on 24 January 1871. The battle was lost, and the French capitulated on 28 January. Viollet-le-Duc wrote to his wife on February 28, \"I don't know what will become of me, but I do not want to return any more to administration. I am disgusted by it forever, and want nothing more than to pass the years that remain to me in study and in the most modest possible life.\" Always the scholar, he wrote a detailed study of the effectiveness and deficiencies of the fortifications of Paris during the siege, which was to be used for the 1917 defense of Verdun and the construction of the Maginot line in 1938.", "title": "End of the Empire and of Restoration" }, { "paragraph_id": 26, "text": "In May 1871 he left his home in Paris just before national guardsmen arrived to draft him into the armed force of the Paris Commune who subsequently condemned him to death. He escaped to Pierrefonds, where he had a small apartment before going in exile in Lausanne, where he engage in his passion for mountains, making detailed maps and a series of thirty-two drawings of the alpine scenery. While in Lausanne he was also asked to undertake the restoration of the cathedral.", "title": "End of the Empire and of Restoration" }, { "paragraph_id": 27, "text": "He returned later to Paris after the Commune had been suppressed and saw the ruins of most of the public buildings of the city, burned by the Commune in its last days. 
He received his only commission from the new government of the French Third Republic; Jules Simon, the new Minister of Culture and Public Instruction, asked him to design a plaque to be placed before Notre-Dame to honor the hostages killed by the Paris Commune in its final days.", "title": "End of the Empire and of Restoration" }, { "paragraph_id": 28, "text": "The new government of the French Third Republic made little use of his expertise in the restoration of the major government buildings which had been burned by the Paris Commune, including the Tuileries Palace, the Palace of the Legion of Honor, the Palais-Royal, the library of the Louvre, the Ministry of Justice and the Ministry of Finance. The only reconstruction on which he was consulted was that of the Hôtel de Ville. The writer Edmond de Goncourt called for leaving the ruin of the Hôtel de Ville exactly as it was, \"a ruin of a magical palace, a marvel of the picturesque. The country should not condemn it without appeal to restoration by Viollet-le-Duc.\" The government asked Viollet-le-Duc to organize a competition. He presented two options: either to restore the building to its original state, with its historic interior, or to demolish it and build a new city hall. In July 1872 the government decided to preserve the Renaissance facade, but otherwise to completely demolish and rebuild the building.", "title": "End of the Empire and of Restoration" }, { "paragraph_id": 29, "text": "Throughout his life Viollet-le-Duc wrote over 100 publications on architecture, decoration, history, archeology and other subjects, some of which would become international best-sellers: Dictionary of French Architecture from 11th to 16th Century (1854–1868), Entretiens sur l'architecture (1863–1872), L'histoire d'une Maison (1873) and Histoire d'un Dessinateur: Comment on Apprend à Dessiner (1879).", "title": "Author and theorist" }, { "paragraph_id": 30, "text": "In his Entretiens sur l'architecture he concentrated in particular on the use of iron and other new materials, and the importance of designing buildings whose architecture was adapted to their function, rather than to a particular style. The book was translated into English in 1881 and won a large following in the United States. The Chicago architect Louis Sullivan, one of the inventors of the skyscraper, often invoked the phrase, \"Form follows function.\"", "title": "Author and theorist" }, { "paragraph_id": 31, "text": "Lausanne Cathedral was his final major restoration project; it was rebuilt following his plans between 1873 and 1876. Work continued after his death. His reconstruction of the bell tower was later criticized; he eliminated the original octagonal base and added a new spire, which rested on the walls, and not on the vaulting, like the original spire. He also added new decoration, crowning the spire at mid-height with gables, an element of his own invention, and removing the original tiles. He was also criticized for the materials and ornaments he added to the towers, including gargoyles. His structural design was preserved, but in 1925 his gargoyles and original ornamentation were removed, and the spire was re-covered with tiles.", "title": "Author and theorist" }, { "paragraph_id": 32, "text": "His reputation extended beyond France. The spire and roof of Strasbourg Cathedral had been damaged by German artillery during the Franco-Prussian War, and the city was now part of Germany.
The German government invited Viollet-le-Duc to comment on their plans for the restoration, which involved a more grandiose Romanesque tower. Viollet-le-Duc informed the German architect that the planned new tower was completely out of character with the original facade and style of the cathedral. His advice was accepted, and the church was restored to its original form.", "title": "Author and theorist" }, { "paragraph_id": 33, "text": "In 1872 Viollet-le-Duc was engaged in the reconstruction of the Château d'Amboise, owned by the descendants of the former King, Louis-Philippe. The chateau had been confiscated by Napoleon III in 1848 but was returned to the family in 1872. It was a massive project to turn it into a residence, involving at times three hundred workers. Viollet-le-Duc designed all the work to the finest details, including the floor tiles, the gas lights in the salons, the ovens in the kitchen, and the electric bells for summoning servants.", "title": "Author and theorist" }, { "paragraph_id": 34, "text": "In 1874 Viollet-le-Duc resigned as diocesan architect of Paris and was succeeded by his contemporary, Paul Abadie. In his final years, he continued to supervise the restoration projects that were underway for the Commission of Historical Monuments. He engaged in polemics about architecture in the press, and was elected to the Paris municipal council.", "title": "Author and theorist" }, { "paragraph_id": 35, "text": "While planning the design and construction of the Statue of Liberty (Liberty Enlightening the World) sculptor Frédéric Auguste Bartholdi interested Viollet-le-Duc, his friend and mentor, in the project. As chief engineer, Viollet-le-Duc designed a brick pier within the statue, to which the skin would be anchored. After consultations with the metalwork foundry Gaget, Gauthier & Co., Viollet-le-Duc chose the metal which would be used for the skin, copper sheets, and the method used to shape it, repoussé, in which the sheets were heated and then struck with wooden hammers. An advantage of this choice was that the entire statue would be light for its volume, as the copper need be only 0.094 inches (2.4 mm) thick.", "title": "Statue of Liberty" }, { "paragraph_id": 36, "text": "He became engaged in the planning and construction of the Paris Universal Exposition of 1878. He proposed to the Minister of Education, Jules Ferry, that the Trocadéro Palace, the main building of the Exposition on the hilltop of Chaillot, be transformed after the Exposition into a museum of French monuments, displaying models of architecture and sculpture from landmarks around France. This idea was accepted. The National Museum of French Monuments opened in 1882, after his death. The Palais was reconstructed into the Palais de Chaillot in 1937, but the Museum of French Monuments was preserved and can be seen there today.", "title": "National Museum of French Monuments and final years" }, { "paragraph_id": 37, "text": "In his final years his son Eugène-Louis became the head of the Commission of Historic Monuments. He took on just one new project, the restoration of the cloister of the Augustines at Toulouse. He completed his series of dictionaries of architectural periods, designed for a general audience. He also devoted more time to studying the geography of the Alps around Mont-Blanc. He spent his summers hiking in the mountains and writing articles about his travels. He launched a public campaign for the re-forestation of the Alps, and published a detailed map of the area in 1876. 
He spent more and more time at La Vedette, the villa he constructed in Lausanne, a house on the model of a Savoyard chalet, but with a minimum of decoration, illustrating his new doctrine of form following function. He made one last visit to inspect Carcassonne, whose work was now under his son's direction. After an exhausting summer of hiking in the Alps in 1879, he became ill and died in Lausanne on 17 September 1879. He was buried in the cemetery of La Sallaz in Lausanne. In 1946 his grave and monument were transferred to the Cemetery of Bois-le-Vaux (Section XVIII) in Lausanne.", "title": "National Museum of French Monuments and final years" }, { "paragraph_id": 38, "text": "Viollet-le-Duc married Elisabeth Tempier in Paris on 3 May 1834. The couple had two children, but separated a few years after marriage, and spent little time together; he was continually on the road. The writer Geneviève Viollet-le-Duc (winner of the prix Broquette-Gonin in 1978) was his great-granddaughter.", "title": "Family" }, { "paragraph_id": 39, "text": "Viollet-le-Duc famously defined restoration in volume eight of his Dictionnaire raisonné de l'architecture française du XI au XVI siecle of 1858: \"To restore a building is not to maintain it, repair it or remake it: it is to re-establish it in a complete state which may never have existed at any given moment.\" He then explained that it had to meet four conditions: (1) The \"re-establishment\" had to be scientifically documented with plans and photographs and archeological records, which would guarantee exactness. (2) The restoration had to involve not just the appearance of the monument, or the effect that it produced, but also its structure; it had to use the most efficient means to assure the long life of the building, including using more solid materials, used more wisely. (3) the restoration had to exclude any modification contrary to obvious evidence; but the structure could be adapted to conform to more modern or rational uses and practices, which meant alterations to the original plan; and (4) The restoration should preserve older modifications made to the building, with the exception of those which compromised its stability or its conservation, or those which gravely violated the value of its historical presence.", "title": "Doctrine" }, { "paragraph_id": 40, "text": "He drew conclusions from medieval architecture that he applied to modern architecture. He noted that it was sometimes necessary to employ an iron frame in restoration to avoid the danger of fires, as long as the new structure was not heavier than the original, and kept the original balance of forces found in medieval structures. \"The monuments of the Middle Ages were carefully calculated, and their organism is delicate. There is nothing in excess in their works, nothing useless. If you change one of the conditions of these organisms, you change all the others. Many people consider this a fault; for us, this is a quality which we too often neglect in our modern construction....Why should we build expensive walls two meters thick, if walls fifty centimeters thick [with reinforced supports], offer sufficient stability? 
In the structure of the Middle Ages, every portion of a work fulfilled a function and possessed an action.\"", "title": "Doctrine" }, { "paragraph_id": 41, "text": "During his entire career, Viollet-le-Duc was engaged in a dispute with the doctrines of the École des Beaux-Arts, the leading architectural school of France, which he refused to attend as a student, and where he taught briefly as a professor, before being pressured to depart. In 1846 he engaged in a fervent exchange in print with Quatremère de Quincy, the Perpetual Secretary of the French Academy, on the question, \"Is it suitable, in the 19th century, to build churches in the Gothic style?\" De Quincy and his followers denounced the Gothic style as incoherent, disorderly, unintelligent, decadent and without taste. Viollet-le-Duc responded, \"What we want, messieurs, is the return of an art which was born in our country....Leave to Rome what belongs to Rome, and to Athens what belongs to Athens. Rome didn't want our Gothic (and was perhaps the only one in Europe to reject it) and they were right, because when one has the good fortune to possess a national architecture, the best thing is to keep it.\"", "title": "Gothic vs. Beaux-Arts" }, { "paragraph_id": 42, "text": "\"If you study for a moment a church of the 13th century\", he wrote, \"you see that all of the construction is carried out according to an invariable system. All the forces and the weights are thrust out to the exterior, a disposition which gives the interior the greatest open space possible. The flying buttresses and contreforts alone support the entire structure, and always have an aspect of resistance, of force and stability which reassures the eye and the spirit; the vaults, built with materials that are easy to mount and to place at a great height, are combined in a way that places the totality of their weight on the piers; that the most simple means are always employed...and that all the parts of these constructions, independent of each other, even as they rely on each other, present an elasticity and a lightness needed in a building of such great dimensions. We can still see (and this is only found in Gothic architecture) that human proportions are the one fixed rule.\"", "title": "Gothic vs. Beaux-Arts" }, { "paragraph_id": 43, "text": "Viollet-le-Duc was often accused by certain critics, in his own time and later, of pursuing the spirit of the Gothic style in some of his restorations instead of strict historical accuracy. Many art historians also consider that the British architectural writer John Ruskin and William Morris were ferocious opponents of Viollet-le-Duc’s restorations. But Ruskin never criticised Viollet-le-Duc’s restoration work in itself; rather, he criticised the principle of restoration itself. Indeed, at the beginning of his career Ruskin had a very radical opinion on restoration: \"a building should be looked after and if not it should be left to die\". Viollet-le-Duc's position on the subject was more nuanced: \"if a building has not been kept up it should be restored\".", "title": "Controversy" }, { "paragraph_id": 44, "text": "The existence of an opposition between Ruskin and Viollet-le-Duc on restoration is today questioned by new research based on Ruskin's own writings: \"there is no book on architecture which has everything correct apart from Viollet-le-Duc’s Dictionary\".
And at the end of his life Ruskin expressed the regret that \"no one in England had done the work that Viollet le Duc had done in France\".", "title": "Controversy" }, { "paragraph_id": 45, "text": "Viollet-le-Duc's restorations sometimes involved non-historical additions, either to assure the stability of the building, or sometimes simply to maintain the harmony of the design. The flèche or spire of Notre-Dame de Paris, which had been constructed in about 1250, was removed in 1786 after it was damaged by the wind. Viollet-le-Duc designed and constructed a new spire, ornamented with statuary, which was taller than the original and modified to resist the weather, but in harmony with the rest of the design. In the 19th and 20th century, his flèche was a target for critics.", "title": "Controversy" }, { "paragraph_id": 46, "text": "He was also criticized later for his modifications of the choir of Notre-Dame, which had been rebuilt in the Louis XIV style during the reign of that king. Viollet-le-Duc took out the old choir, including the altar where Napoleon Bonaparte had been crowned Emperor and replaced them with a Gothic altar and decoration which he designed. When he modified the choir, he also constructed new bays with small Gothic rose windows modelled on those in the church of Chars, in the Oise Valley. Some historians condemned these restorations as non-historical invention. His defenders pointed out that Viollet-le-Duc did not make any decisions on the restoration of Notre-Dame by himself; all of his plans were approved by Prosper Mérimée, the Inspector of Historical Monuments, and by the Commission of historic monuments.", "title": "Controversy" }, { "paragraph_id": 47, "text": "He was criticized for the abundance of Gothic gargoyles, chimeras, fleurons, and pinnacles which he added to Notre-Dame Cathedral. These decorations had existed in the Middle Ages but had largely been removed during the reign of Louis XIV. The last original gargoyles had been taken down in 1813. He modelled the new gargoyles and monsters on examples from other cathedrals of the period.", "title": "Controversy" }, { "paragraph_id": 48, "text": "He was later criticized also for the stained glass windows he designed and had made for the chapels around the ground level of the cathedral, which feature intricate Gothic designs in grisaille, which allow more light into the church. The contemporary view of the controversy of his restoration is summarized on a descriptive panel near the altar of the cathedral: \"The great restoration, carried to fruition by Viollet-le-Duc following the death of Lassus, supplied new radiance to the cathedral – whatever reservations one might have about the choices that were made. The work of the nineteenth century is now as much a part of the architectural history of Notre-Dame as that undertaken in previous centuries.\"", "title": "Controversy" }, { "paragraph_id": 49, "text": "The restoration of ramparts of Carcassonne was also criticized in the 20th century. His critics pointed out that the pointed caps of the towers he constructed were more typical of northern France, not the region where Carcassonne was located, near the Spanish border. Similarly he added roofs of northern slate tiles rather than southern clay tiles, a choice that has been reversed in more recent restorations. His critics also claimed that Viollet-le-Duc sought a \"condition of completeness\" which never actually existed at any given time. 
The principal counter-argument made by Viollet-le-Duc's defenders was that, without his prompt restorations, many of the buildings that he restored would have been lost, and that he did the best that he could with the knowledge that was then available.", "title": "Controversy" }, { "paragraph_id": 50, "text": "Mortimer Wheeler's entry on English archaeologist Charles R Peers for the Dictionary of National Biography (1971) is worth quoting for its critique of Viollet-le-Duc: “he [Peers] laid down the principles which have governed architectural conservation in the United Kingdom and have served as a model in other parts of the world. His cardinal principle was to retain but not to restore the surviving remains of an ancient structure; and in this respect he departed emphatically from the tradition of Viollet-le-Duc and his successors in France and Italy, where exuberant restoration frequently obscured the evidence upon which it was based ...”", "title": "Controversy" }, { "paragraph_id": 51, "text": "Throughout his career Viollet-le-Duc made notes and drawings, not only for the buildings he was working on but also on Romanesque, Gothic and Renaissance buildings that were to be soon demolished. His notes were useful when preparing his published works. His study of medieval and Renaissance periods was not limited to architecture but extended also to such areas as furniture, clothing, musical instruments, armament, and geology.", "title": "Publications" }, { "paragraph_id": 52, "text": "His work was published, first in serial form, and then as full-scale books, as:", "title": "Publications" }, { "paragraph_id": 53, "text": "Viollet-le-Duc is considered by many to be the first theorist of modern architecture. Sir John Summerson wrote that \"there have been two supremely eminent theorists in the history of European architecture – Leon Battista Alberti and Eugène Viollet-le-Duc.\"", "title": "Architectural theory and new building projects" }, { "paragraph_id": 54, "text": "His architectural theory was largely based on finding the ideal forms for specific materials and using these forms to create buildings. His writings centered on the idea that materials should be used \"honestly\". He believed that the outward appearance of a building should reflect the rational construction of the building. In Entretiens sur l'architecture, Viollet-le-Duc praised the Greek temple for its rational representation of its construction. For him, \"Greek architecture served as a model for the correspondence of structure and appearance.\"", "title": "Architectural theory and new building projects" }, { "paragraph_id": 55, "text": "Another component in Viollet-le-Duc's theory was how the design of a building should start from its program and the plan, and end with its decorations. If this resulted in an asymmetrical exterior, so be it. He dismissed the symmetry of classicist buildings as vain, caring too much about appearances at the expense of practicality and convenience for the inhabitants of the house.", "title": "Architectural theory and new building projects" }, { "paragraph_id": 56, "text": "In several unbuilt projects for new buildings, Viollet-le-Duc applied the lessons he had derived from Gothic architecture, applying its rational structural systems to modern building materials such as cast iron. For inspiration, he also examined organic structures, such as leaves and animal skeletons. 
He was especially interested in the wings of bats, an influence represented by his Assembly Hall project.", "title": "Architectural theory and new building projects" }, { "paragraph_id": 57, "text": "Viollet-le-Duc's drawings of iron trusswork were innovative for the time. Many of his designs emphasizing iron would later influence the Art Nouveau movement, most noticeably in the work of Hector Guimard, Victor Horta, Antoni Gaudí and Hendrik Petrus Berlage. His writings inspired several American architects, including Frank Furness, John Wellborn Root, Louis Sullivan, and Frank Lloyd Wright.", "title": "Architectural theory and new building projects" }, { "paragraph_id": 58, "text": "Viollet-le-Duc had a second career in the military, primarily in the defense of Paris during the Franco-Prussian War (1870–71). He was so influenced by the conflict that during his later years he described the idealized defense of France by the analogy of the military history of Le Roche-Pont, an imaginary castle, in his work Histoire d'une Forteresse (Annals of a Fortress, twice translated into English). Accessible and well researched, it is partly fictional.", "title": "Military career and influence" }, { "paragraph_id": 59, "text": "Annals of a Fortress strongly influenced French military defensive thinking. Viollet-le-Duc's critique of the effect of artillery (applying his practical knowledge from the 1870–1871 war) is so complete that it accurately describes the principles applied to the defense of France until World War II. The physical results of his theories are present in the fortification of Verdun prior to World War I and the Maginot Line prior to World War II. His theories are also represented by the French military theory of \"Deliberate Advance\", which stresses that artillery and a strong system of fortresses in the rear of an army are essential.", "title": "Military career and influence" }, { "paragraph_id": 60, "text": "The English architect Benjamin Bucknall (1833–1895) was a devotee of Viollet-le-Duc and during 1874 to 1881 translated several of his publications into English to popularise his principles in Great Britain. The later works of the English designer and architect William Burges were greatly influenced by Viollet-le-Duc, most strongly in Burges's designs for his own home, The Tower House in London's Holland Park district, and Burges's designs for Castell Coch near Cardiff, Wales.", "title": "Legacy" }, { "paragraph_id": 61, "text": "An exhibition, Eugène Viollet-le-Duc 1814–1879 was presented in Paris in 1965, and there was a larger, centennial exhibition in 1980.", "title": "Legacy" }, { "paragraph_id": 62, "text": "Viollet-le-Duc was the subject of a Google Doodle on January 27, 2014.", "title": "Legacy" } ]
Eugène Emmanuel Viollet-le-Duc was a French architect and author, famous for his restoration of the most prominent medieval landmarks in France. His major restoration projects included Notre-Dame de Paris, the Basilica of Saint Denis, Mont Saint-Michel, Sainte-Chapelle, the medieval walls of the city of Carcassonne, and Roquetaillade castle in the Bordeaux region. His writings on decoration and on the relationship between form and function in architecture had a fundamental influence on a whole new generation of architects, including all the major Art Nouveau artists: Antoni Gaudí, Victor Horta, Hector Guimard, Henry Van de Velde, Henry Sauvage and the Ecole de Nancy, Paul Hankar, Otto Wagner, Eugene Grasset, Emile Gallé and Hendrik Petrus Berlage. He also influenced the first modern architects, Frank Lloyd Wright, Mies van der Rohe, Auguste Perret, Louis Sullivan and Le Corbusier, who considered Viollet-le-Duc the father of modern architecture: "The roots of modern architecture are to be found in Viollet-le-Duc". His writings also influenced John Ruskin, William Morris and the Arts and Crafts movement. At the 1862 international exhibition in London, the aesthetic works of Burne-Jones, Rossetti, Philip Webb, William Morris, Simeon Solomon and Edward Poynter were directly influenced by drawings in Viollet-le-Duc's Dictionary. The English architect William Burges admitted late in his life, "We all cribbed from Viollet-le-Duc even though no one could read French."
2001-08-10T11:52:36Z
2023-12-06T02:27:17Z
[ "Template:ISBN", "Template:Cite news", "Template:Sfn", "Template:Cite book", "Template:Commons category-inline", "Template:Cite web", "Template:Webarchive", "Template:Statue of Liberty", "Template:In lang", "Template:Notre-Dame de Paris", "Template:Cite journal", "Template:Authority control (arts)", "Template:Distinguish", "Template:Snd", "Template:Cite EB1911", "Template:Short description", "Template:Infobox architect", "Template:IPA-fr", "Template:Convert", "Template:Clear", "Template:Reflist", "Template:Cite thesis", "Template:Wikisource inline", "Template:Dead link", "Template:Gutenberg author", "Template:Internet Archive author", "Template:Gutenberg book" ]
https://en.wikipedia.org/wiki/Eug%C3%A8ne_Viollet-le-Duc
9,659
Endocarditis
Endocarditis is an inflammation of the inner layer of the heart, the endocardium. It usually involves the heart valves. Other structures that may be involved include the interventricular septum, the chordae tendineae, the mural endocardium, or the surfaces of intracardiac devices. Endocarditis is characterized by lesions, known as vegetations, which are masses of platelets, fibrin, microcolonies of microorganisms, and scant inflammatory cells. In the subacute form of infective endocarditis, the vegetation may also include a center of granulomatous tissue, which may fibrose or calcify. There are several ways to classify endocarditis. The simplest classification is based on cause: either infective or non-infective, depending on whether a microorganism is the source of the inflammation or not. Regardless, the diagnosis of endocarditis is based on clinical features, investigations such as an echocardiogram, and blood cultures demonstrating the presence of endocarditis-causing microorganisms. Signs and symptoms include fever, chills, sweating, malaise, weakness, anorexia, weight loss, splenomegaly, flu-like feeling, cardiac murmur, heart failure, petechia (red spots on the skin), Osler's nodes (subcutaneous nodules found on hands and feet), Janeway lesions (nodular lesions on palms and soles), and Roth's spots (retinal hemorrhages). Infective endocarditis is an infection of the inner surface of the heart, usually the valves. Symptoms may include fever, small areas of bleeding into the skin, heart murmur, feeling tired, and low red blood cells. Complications may include valvular insufficiency, heart failure, stroke, and kidney failure. The cause is typically a bacterial infection and less commonly a fungal infection. Risk factors include valvular heart disease including rheumatic disease, congenital heart disease, artificial valves, hemodialysis, intravenous drug use, and electronic pacemakers. The bacteria most commonly involved are streptococci or staphylococci. The diagnosis of infective endocarditis relies on the Duke criteria, which were originally described in 1994 and modified in 2000. Clinical features and microbiological examinations are the first steps in diagnosing infective endocarditis. Imaging is also crucial. Echocardiography is the cornerstone imaging modality in the diagnosis of infective endocarditis. Alternative imaging modalities such as computed tomography, magnetic resonance imaging, and positron emission tomography/computed tomography (PET/CT) with 2-[18F]fluorodeoxyglucose (FDG) are playing an increasing role in the diagnosis and management of infective endocarditis. The usefulness of antibiotics following dental procedures has changed over time. Prevention is recommended in patients at high risk. Treatment is generally with intravenous antibiotics. The choice of antibiotics is based on the blood cultures. Occasionally heart surgery is required. Populations at high risk of infective endocarditis include patients with previous infective endocarditis, patients with surgical or transcatheter prosthetic valves or post-cardiac valve repair, and patients with untreated congenital heart disease (CHD) and surgically corrected congenital heart disease. The number of people affected is about 5 per 100,000 per year. Rates, however, vary between regions of the world. Males are affected more often than females. The risk of death among those infected is about 25%. Without treatment it is almost universally fatal.
Nonbacterial thrombotic endocarditis (NBTE) is most commonly found on previously undamaged valves. As opposed to infective endocarditis, the vegetations in NBTE are small, sterile, and tend to aggregate along the edges of the valve or the cusps. Also unlike infective endocarditis, NBTE does not cause an inflammation response from the body. NBTE usually occurs during a hypercoagulable state such as system-wide bacterial infection, or pregnancy, though it is also sometimes seen in patients with venous catheters. NBTE may also occur in patients with cancers, particularly mucinous adenocarcinoma where Trousseau syndrome can be encountered. Typically NBTE does not cause many problems on its own, but parts of the vegetations may break off and embolize to the heart or brain, or they may serve as a focus where bacteria can lodge, thus causing infective endocarditis. Another form of sterile endocarditis is termed Libman–Sacks endocarditis; this form occurs more often in patients with lupus erythematosus and is thought to be due to the deposition of immune complexes. Like NBTE, Libman-Sacks endocarditis involves small vegetations, while infective endocarditis is composed of large vegetations. These immune complexes precipitate an inflammation reaction, which helps to differentiate it from NBTE. Also unlike NBTE, Libman-Sacks endocarditis does not seem to have a preferred location of deposition and may form on the undersurfaces of the valves or even on the endocardium.
[ { "paragraph_id": 0, "text": "Endocarditis is an inflammation of the inner layer of the heart, the endocardium. It usually involves the heart valves. Other structures that may be involved include the interventricular septum, the chordae tendineae, the mural endocardium, or the surfaces of intracardiac devices. Endocarditis is characterized by lesions, known as vegetations, which is a mass of platelets, fibrin, microcolonies of microorganisms, and scant inflammatory cells. In the subacute form of infective endocarditis, the vegetation may also include a center of granulomatous tissue, which may fibrose or calcify.", "title": "" }, { "paragraph_id": 1, "text": "There are several ways to classify endocarditis. The simplest classification is based on cause: either infective or non-infective, depending on whether a microorganism is the source of the inflammation or not. Regardless, the diagnosis of endocarditis is based on clinical features, investigations such as an echocardiogram, and blood cultures demonstrating the presence of endocarditis-causing microorganisms.", "title": "" }, { "paragraph_id": 2, "text": "Signs and symptoms include fever, chills, sweating, malaise, weakness, anorexia, weight loss, splenomegaly, flu-like feeling, cardiac murmur, heart failure, petechia (red spots on the skin), Osler's nodes (subcutaneous nodules found on hands and feet), Janeway lesions (nodular lesions on palms and soles), and Roth's spots (retinal hemorrhages).", "title": "" }, { "paragraph_id": 3, "text": "Infective endocarditis is an infection of the inner surface of the heart, usually the valves. Symptoms may include fever, small areas of bleeding into the skin, heart murmur, feeling tired, and low red blood cells. Complications may include valvular insufficiency, heart failure, stroke, and kidney failure.", "title": "Infective endocarditis" }, { "paragraph_id": 4, "text": "The cause is typically a bacterial infection and less commonly a fungal infection. Risk factors include valvular heart disease including rheumatic disease, congenital heart disease, artificial valves, hemodialysis, intravenous drug use, and electronic pacemakers. The bacterial most commonly involved are streptococci or staphylococci.", "title": "Infective endocarditis" }, { "paragraph_id": 5, "text": "The diagnosis of infective endocarditis relies on the Duke criteria, which were originally described in 1994 and modified in 2000. Clinical features and microbiological examinations are the first steps to diagnose an infective endocarditis. The imaging is also crucial. Echocardiography is the cornerstone of imaging modality in the diagnosis of infective endocarditis. Alternative imaging modalities as computer tomography, magnetic resonance imaging, and positron emission tomography/computer tomography (PET/CT) with 2-[18F]fluorodeoxyglucose (FDG) are playing an increasing role in the diagnosis and management of infective endocarditis.", "title": "Infective endocarditis" }, { "paragraph_id": 6, "text": "The usefulness of antibiotics following dental procedures has changed over the time. PRevention is recommended in patients at high risk. Treatment is generally with intravenous antibiotics. The choice of antibiotics is based on the blood cultures. Occasionally heart surgery is required. 
Populations at high risk of infective endocarditis include patients with previous infective endocarditis, patients with surgical or transcatheter prosthetic valves or post-cardiac valve repair, and patients with untreated CHD and surgically corrected congenital heart diease.", "title": "Infective endocarditis" }, { "paragraph_id": 7, "text": "The number of people affected is about 5 per 100,000 per year. Rates, however, vary between regions of the world. Males are affected more often than females. The risk of death among those infected is about 25%. Without treatment it is almost universally fatal.", "title": "Infective endocarditis" }, { "paragraph_id": 8, "text": "Nonbacterial thrombotic endocarditis (NBTE) is most commonly found on previously undamaged valves. As opposed to infective endocarditis, the vegetations in NBTE are small, sterile, and tend to aggregate along the edges of the valve or the cusps. Also unlike infective endocarditis, NBTE does not cause an inflammation response from the body. NBTE usually occurs during a hypercoagulable state such as system-wide bacterial infection, or pregnancy, though it is also sometimes seen in patients with venous catheters. NBTE may also occur in patients with cancers, particularly mucinous adenocarcinoma where Trousseau syndrome can be encountered. Typically NBTE does not cause many problems on its own, but parts of the vegetations may break off and embolize to the heart or brain, or they may serve as a focus where bacteria can lodge, thus causing infective endocarditis.", "title": "Non-infective endocarditis" }, { "paragraph_id": 9, "text": "Another form of sterile endocarditis is termed Libman–Sacks endocarditis; this form occurs more often in patients with lupus erythematosus and is thought to be due to the deposition of immune complexes. Like NBTE, Libman-Sacks endocarditis involves small vegetations, while infective endocarditis is composed of large vegetations. These immune complexes precipitate an inflammation reaction, which helps to differentiate it from NBTE. Also unlike NBTE, Libman-Sacks endocarditis does not seem to have a preferred location of deposition and may form on the undersurfaces of the valves or even on the endocardium.", "title": "Non-infective endocarditis" } ]
Endocarditis is an inflammation of the inner layer of the heart, the endocardium. It usually involves the heart valves. Other structures that may be involved include the interventricular septum, the chordae tendineae, the mural endocardium, or the surfaces of intracardiac devices. Endocarditis is characterized by lesions, known as vegetations, which is a mass of platelets, fibrin, microcolonies of microorganisms, and scant inflammatory cells. In the subacute form of infective endocarditis, the vegetation may also include a center of granulomatous tissue, which may fibrose or calcify. There are several ways to classify endocarditis. The simplest classification is based on cause: either infective or non-infective, depending on whether a microorganism is the source of the inflammation or not. Regardless, the diagnosis of endocarditis is based on clinical features, investigations such as an echocardiogram, and blood cultures demonstrating the presence of endocarditis-causing microorganisms. Signs and symptoms include fever, chills, sweating, malaise, weakness, anorexia, weight loss, splenomegaly, flu-like feeling, cardiac murmur, heart failure, petechia, Osler's nodes, Janeway lesions, and Roth's spots.
2002-02-11T21:26:39Z
2023-12-14T04:38:58Z
[ "Template:Cite journal", "Template:Authority control", "Template:Infobox medical condition (new)", "Template:Reflist", "Template:Cite web", "Template:Curlie", "Template:Medical resources", "Template:Circulatory system pathology", "Template:Main", "Template:Cite book", "Template:Scholia" ]
https://en.wikipedia.org/wiki/Endocarditis
9,660
Euler's sum of powers conjecture
In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k: The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case n = 2: if a₁ᵏ + a₂ᵏ = bᵏ, {\displaystyle a_{1}^{k}+a_{2}^{k}=b^{k},} then 2 ≥ k. Although the conjecture holds for the case k = 3 (which follows from Fermat's Last Theorem for the third powers), it was disproved for k = 4 and k = 5. It is unknown whether the conjecture fails or holds for any value k ≥ 6. Euler was aware of the equality 59⁴ + 158⁴ = 133⁴ + 134⁴ involving sums of four fourth powers; this, however, is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem as in Plato's number 3³ + 4³ + 5³ = 6³ or the taxicab number 1729. The general solution of the equation x₁³ + x₂³ = x₃³ + x₄³ {\displaystyle x_{1}^{3}+x_{2}^{3}=x_{3}^{3}+x_{4}^{3}} is where a and b are any integers. Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for k = 5. This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known: (Lander & Parkin, 1966); (Scher & Seidl, 1996); (Frye, 2004). In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the k = 4 case. His smallest counterexample was A particular case of Elkies' solutions can be reduced to the identity where This is an elliptic curve with a rational point at v₁ = −31/467. From this initial rational point, one can compute an infinite collection of others. Substituting v₁ into the identity and removing common factors gives the numerical example cited above. In 1988, Roger Frye found the smallest possible counterexample for k = 4 by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000. In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured that if where aᵢ ≠ bⱼ are positive integers for all 1 ≤ i ≤ n and 1 ≤ j ≤ m, then m + n ≥ k. In the special case m = 1, the conjecture states that if (under the conditions given above) then n ≥ k − 1. The special case may be described as the problem of giving a partition of a perfect power into few like powers. For k = 4, 5, 7, 8 and n = k or k − 1, there are many known solutions. Some of these are listed below. See OEIS: A347773 for more data. (R. Frye, 1988); (R. Norrie, smallest, 1911). (Lander & Parkin, 1966); (Lander, Parkin, Selfridge, smallest, 1967); (Lander, Parkin, Selfridge, second smallest, 1967); (Sastry, 1934, third smallest). As of 2002, there are no solutions for k = 6 whose final term is ≤ 730000. (M. Dodrill, 1999). (S. Chase, 2000).
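The identities and counterexamples mentioned above are easy to confirm by direct computation. The following sketch (Python) does that; since the displayed equations are omitted from the text above, the concrete values used for the Lander & Parkin and Frye counterexamples are the well-known published ones, and the helper name is_sum_counterexample is introduced here purely for illustration.

    # Numeric check of the identities and counterexamples discussed above.
    # The specific values for Lander & Parkin (1966, k = 5) and Frye (1988, k = 4)
    # are the well-known published counterexamples; they do not appear explicitly
    # in the text above, which omits the displayed equations.

    def is_sum_counterexample(terms, total, k):
        """True if sum(t**k for t in terms) == total**k with fewer than k terms."""
        return sum(t ** k for t in terms) == total ** k and len(terms) < k

    # Euler's equality of two sums of fourth powers (not a counterexample,
    # because no kth power stands alone on one side of the equation).
    assert 59**4 + 158**4 == 133**4 + 134**4 == 635318657

    # Plato's number and the taxicab number 1729, two classical cube identities.
    assert 3**3 + 4**3 + 5**3 == 6**3
    assert 1**3 + 12**3 == 9**3 + 10**3 == 1729

    # Lander & Parkin (1966): four fifth powers summing to a fifth power.
    assert is_sum_counterexample([27, 84, 110, 133], 144, k=5)

    # Frye (1988): three fourth powers summing to a fourth power,
    # the smallest such counterexample for k = 4.
    assert is_sum_counterexample([95800, 217519, 414560], 422481, k=4)

    print("All identities and counterexamples verified.")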
[ { "paragraph_id": 0, "text": "In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k:", "title": "" }, { "paragraph_id": 1, "text": "The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case n = 2: if a 1 k + a 2 k = b k , {\\displaystyle a_{1}^{k}+a_{2}^{k}=b^{k},} then 2 ≥ k.", "title": "" }, { "paragraph_id": 2, "text": "Although the conjecture holds for the case k = 3 (which follows from Fermat's Last Theorem for the third powers), it was disproved for k = 4 and k = 5. It is unknown whether the conjecture fails or holds for any value k ≥ 6.", "title": "" }, { "paragraph_id": 3, "text": "Euler was aware of the equality 59 + 158 = 133 + 134 involving sums of four fourth powers; this, however, is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem as in Plato's number 3 + 4 + 5 = 6 or the taxicab number 1729. The general solution of the equation x 1 3 + x 2 3 = x 3 3 + x 4 3 {\\displaystyle x_{1}^{3}+x_{2}^{3}=x_{3}^{3}+x_{4}^{3}} is", "title": "Background" }, { "paragraph_id": 4, "text": "where a and b are any integers.", "title": "Background" }, { "paragraph_id": 5, "text": "Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for k = 5. This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known:", "title": "Counterexamples" }, { "paragraph_id": 6, "text": "(Lander & Parkin, 1966); (Scher & Seidl, 1996); (Frye, 2004).", "title": "Counterexamples" }, { "paragraph_id": 7, "text": "In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the k = 4 case. His smallest counterexample was", "title": "Counterexamples" }, { "paragraph_id": 8, "text": "A particular case of Elkies' solutions can be reduced to the identity", "title": "Counterexamples" }, { "paragraph_id": 9, "text": "where", "title": "Counterexamples" }, { "paragraph_id": 10, "text": "This is an elliptic curve with a rational point at v1 = −31/467. From this initial rational point, one can compute an infinite collection of others. Substituting v1 into the identity and removing common factors gives the numerical example cited above.", "title": "Counterexamples" }, { "paragraph_id": 11, "text": "In 1988, Roger Frye found the smallest possible counterexample", "title": "Counterexamples" }, { "paragraph_id": 12, "text": "for k = 4 by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000.", "title": "Counterexamples" }, { "paragraph_id": 13, "text": "In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured that if", "title": "Generalizations" }, { "paragraph_id": 14, "text": "where ai ≠ bj are positive integers for all 1 ≤ i ≤ n and 1 ≤ j ≤ m, then m + n ≥ k. 
In the special case m = 1, the conjecture states that if", "title": "Generalizations" }, { "paragraph_id": 15, "text": "(under the conditions given above) then n ≥ k − 1.", "title": "Generalizations" }, { "paragraph_id": 16, "text": "The special case may be described as the problem of giving a partition of a perfect power into few like powers. For k = 4, 5, 7, 8 and n = k or k − 1, there are many known solutions. Some of these are listed below.", "title": "Generalizations" }, { "paragraph_id": 17, "text": "See OEIS: A347773 for more data.", "title": "Generalizations" }, { "paragraph_id": 18, "text": "(R. Frye, 1988); (R. Norrie, smallest, 1911).", "title": "Generalizations" }, { "paragraph_id": 19, "text": "(Lander & Parkin, 1966); (Lander, Parkin, Selfridge, smallest, 1967); (Lander, Parkin, Selfridge, second smallest, 1967); (Sastry, 1934, third smallest).", "title": "Generalizations" }, { "paragraph_id": 20, "text": "As of 2002, there are no solutions for k = 6 whose final term is ≤ 730000.", "title": "Generalizations" }, { "paragraph_id": 21, "text": "(M. Dodrill, 1999).", "title": "Generalizations" }, { "paragraph_id": 22, "text": "(S. Chase, 2000).", "title": "Generalizations" } ]
In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k: The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case n = 2: if a 1 k + a 2 k = b k , then 2 ≥ k. Although the conjecture holds for the case k = 3, it was disproved for k = 4 and k = 5. It is unknown whether the conjecture fails or holds for any value k ≥ 6.
2002-02-25T15:51:15Z
2023-10-03T21:06:16Z
[ "Template:Main article", "Template:Cite web", "Template:MathWorld", "Template:Disproved conjectures", "Template:Cite book", "Template:Citation", "Template:Mvar", "Template:Math", "Template:Cite journal", "Template:Cbignore", "Template:Short description", "Template:Nowrap", "Template:OEIS2C", "Template:Reflist", "Template:Leonhard Euler" ]
https://en.wikipedia.org/wiki/Euler%27s_sum_of_powers_conjecture
9,662
Book of Exodus
The Book of Exodus (from Ancient Greek: Ἔξοδος, romanized: Éxodos; Biblical Hebrew: שְׁמוֹת Šəmōṯ, 'Names'; Latin: Liber Exodus) is the second religious book of the Bible. It is a narrative of the Exodus, the origin myth of the Israelites leaving slavery in Biblical Egypt through the strength of their deity named Yahweh, who according to the story chose them as his people. The Israelites then journey with the legendary prophet Moses to Mount Sinai, where Yahweh gives the 10 commandments and they enter into a covenant with Yahweh, who promises to make them a "holy nation, and a kingdom of priests" on condition of their faithfulness. He gives them their laws and instructions to build the Tabernacle, the means by which he will come from heaven and dwell with them and lead them in a holy war to conquer Canaan (the "Promised Land"), which has earlier, according to the myth of Genesis, been promised to the "seed" of Abraham, the legendary patriarch of the Israelites. Traditionally ascribed to Moses himself, modern scholars see its initial composition as a product of the Babylonian exile (6th century BCE), based on earlier written sources and oral traditions, with final revisions in the Persian post-exilic period (5th century BCE). American biblical scholar Carol Meyers, in her commentary on Exodus, suggests that it is arguably the most important book in the Bible, as it presents the defining features of Israel's identity—memories of a past marked by hardship and escape, a binding covenant with their God, who chooses Israel, and the establishment of the life of the community and the guidelines for sustaining it. The consensus among modern scholars is that the story in the Book of Exodus is best understood as a myth. The English name Exodus comes from the Ancient Greek: ἔξοδος, romanized: éxodos, lit. 'way out', from ἐξ-, ex-, 'out' and ὁδός, hodós, 'path', 'road'. In Hebrew the book's title is שְׁמוֹת, shemōt, "Names", from the beginning words of the text: "These are the names of the sons of Israel" (Hebrew: וְאֵלֶּה שְׁמֹות בְּנֵי יִשְׂרָאֵל). Most mainstream scholars do not accept the biblical Exodus account as historical for a number of reasons. It is generally agreed that the Exodus stories were written centuries after the apparent setting of the stories. Archaeologists Israel Finkelstein and Neil Asher Silberman argue that archaeology has not found evidence for even a small band of wandering Israelites living in the Sinai: "The conclusion – that Exodus did not happen at the time and in the manner described in the Bible – seems irrefutable [...] repeated excavations and surveys throughout the entire area have not provided even the slightest evidence". Instead, they argue how modern archaeology suggests continuity between Canaanite and Israelite settlements, indicating a heavily Canaanite origin for Israel, with little suggestion that a group of foreigners from Egypt comprised early Israel. However, a majority of scholars believe that the story has some historical basis, though disagreeing widely about what that historical kernel might have been. Kenton Sparks refers to it as "mythologized history". Some scholars such as Benjamin J. Noonan have pointed out that the presence of Egyptian cognates in the Exodus and wilderness traditions "entered Hebrew during the Late Bronze Age, precisely when we would expect them to have been borrowed if the events of these narratives really occurred", challenging the assumption of a post-exilic tradition. 
Furthermore, in direct response to popular claims that the Exodus "wandering period" lacks evidence in the Sinai region, various anthropologists of Near Eastern history have noted that a lack of material culture from the Israelites in the Book of Exodus is actually expected given what is known about historical and present semi-nomadic peoples. There is no unanimous agreement among scholars on the structure of Exodus. One strong possibility is that it is a diptych (i.e., divided into two parts), with the division between parts 1 and 2 at the crossing of the Red Sea or at the beginning of the theophany (appearance of God) in chapter 19. On this plan, the first part tells of God's rescue of his people from Egypt and their journey under his care to Sinai (chapters 1–19) and the second tells of the covenant between them (chapters 20–40). The text of the Book of Exodus begins after the events at the end of the Book of Genesis where Jacob's sons and their families joined their brother Joseph in Egypt, which Joseph had saved from famine. It is 400 years later and Egypt's new Pharaoh, who does not remember Joseph, is fearful that the enslaved and now numerous Israelites could become a fifth column. He hardens their labor and orders the killing of all newborn boys. A Levite woman named Jochebed saves her baby by setting him adrift on the Nile in an ark of bulrushes. Pharaoh's daughter finds the child, names him Moses, and brings him up as her own. Later, a grown Moses goes out to see his kinsmen. He witnesses the abuse of a Hebrew slave by an Egyptian overseer. Angered, Moses kills him and flees into Midian to escape punishment. There, he marries Zipporah, daughter of Jethro, a Midianite priest. While tending Jethro's flock, Moses encounters God in a burning bush. Moses asks God for his name, to which God replies with three words, often translated as "I Am that I Am." This is the book's explanation for the origin of the name Yahweh, as God is thereafter known. God tells Moses to return to Egypt, free the Hebrews from slavery and lead them into Canaan, the land promised to the seed of Abraham in Genesis. On the journey back to Egypt, God seeks to kill Moses. Zipporah circumcises their son and the attack stops. (See Zipporah at the inn.) Moses reunites with his brother Aaron and, returning to Egypt, convenes the Israelite elders, preparing them to go into the wilderness to worship God. Pharaoh refuses to release the Israelites from their work for the festival, and so God curses the Egyptians with ten terrible plagues, such as a river of blood, an outbreak of frogs, and the thick darkness. Moses is commanded by God to fix the spring month of Aviv at the head of the Hebrew calendar. The Israelites are to take a lamb on the 10th day of the month, sacrifice the lamb on the 14th day, daub its blood on their mezuzot—doorposts and lintels, and to observe the Passover meal that night, during the full moon. The 10th plague comes that night, causing the death of all Egyptian firstborn sons, prompting Pharaoh to expel the Israelites. Regretting his decision, Pharaoh commands his chariot army after the Israelites, who appear trapped at the Red Sea. God parts the sea, allowing the Israelites to pass through, before drowning Pharaoh's pursuing forces. As desert life proves arduous, the Israelites complain and long for Egypt, but God miraculously provides manna for them to eat and water to drink. 
The Israelites arrive at the mountain of God, where Moses's father-in-law Jethro visits Moses; at his suggestion, Moses appoints judges over Israel. God asks whether they will agree to be his people – They accept. The people gather at the foot of the mountain, and with thunder and lightning, fire and clouds of smoke, the sound of trumpets, and the trembling of the mountain, God appears on the peak, and the people see the cloud and hear the voice (or possibly sound) of God. God tells Moses to ascend the mountain. God pronounces the Ten Commandments (the Ethical Decalogue) in the hearing of all Israel. Moses goes up the mountain into the presence of God, who pronounces the Covenant Code of ritual and civil law and promises Canaan to them if they obey. Moses comes down from the mountain and writes down God's words, and the people agree to keep them. God calls Moses up the mountain again, where he remains for forty days and forty nights, after which he returns, bearing the set of stone tablets. God gives Moses instructions for the construction of the tabernacle so that God may dwell permanently among his chosen people, along with instructions for the priestly vestments, the altar and its appurtenances, procedures for the ordination of priests, and the daily sacrifice offerings. Aaron becomes the first hereditary high priest. God gives Moses the two tablets of stone containing the words of the ten commandments, written with the "finger of God". While Moses is with God, Aaron casts a golden calf, which the people worship. God informs Moses of their apostasy and threatens to kill them all, but relents when Moses pleads for them. Moses comes down from the mountain, smashes the stone tablets in anger, and commands the Levites to massacre the unfaithful Israelites. God commands Moses to construct two new tablets. Moses ascends the mountain again, where God dictates the Ten Commandments for Moses to write on the tablets. Moses descends from the mountain with a transformed face; from that time onwards he must hide his face with a veil. Moses assembles the Hebrews and repeats to them the commandments he has received from God, which are to keep the Sabbath and to construct the Tabernacle. The Israelites do as they are commanded. From that time God dwells in the Tabernacle and orders the travels of the Hebrews. Jewish and Christian tradition viewed Moses as the author of Exodus and the entire Torah, but by the end of the 19th century the increasing awareness of discrepancies, inconsistencies, repetitions and other features of the Pentateuch had led scholars to abandon this idea. In approximate round dates, the process which produced Exodus and the Pentateuch probably began around 600 BCE when existing oral and written traditions were brought together to form books recognizable as those we know, reaching their final form as unchangeable sacred texts around 400 BCE. Although patent mythical elements are not so prominent in Exodus as in Genesis, ancient legends may have an influence on the book's form or content: for example, the story of the infant Moses's salvation from the Nile is argued to be based on an earlier legend of king Sargon of Akkad, while the story of the parting of the Red Sea may trade on Mesopotamian creation mythology. Similarly, the Covenant Code (the law code in Exodus 20:22–23:33) has some similarities in both content and structure with the Laws of Hammurabi. 
These potential influences serve to reinforce the conclusion that the Book of Exodus originated in the exiled Jewish community of 6th-century BCE Babylon, but not all the potential sources are Mesopotamian: the story of Moses's flight to Midian following the murder of the Egyptian overseer may draw on the Egyptian Story of Sinuhe. Biblical scholars describe the Bible's theologically-motivated history writing as "salvation history", meaning a history of God's saving actions that give identity to Israel – the promise of offspring and land to the ancestors, the Exodus from Egypt (in which God saves Israel from slavery), the wilderness wandering, the revelation at Sinai, and the hope for the future life in the promised land. A theophany is a manifestation (appearance) of a god – in the Bible, an appearance of the God of Israel, accompanied by storms – the earth trembles, the mountains quake, the heavens pour rain, thunder peals and lightning flashes. The theophany in Exodus begins "the third day" from their arrival at Sinai in chapter 19: Yahweh and the people meet at the mountain, God appears in the storm and converses with Moses, giving him the Ten Commandments while the people listen. The theophany is therefore a public experience of divine law. The second half of Exodus marks the point at which, and describes the process through which, God's theophany becomes a permanent presence for Israel via the Tabernacle. That so much of the book (chapters 25–31, 35–40) describes the plans of the Tabernacle demonstrates the importance it played in the perception of Second Temple Judaism at the time of the text's redaction by the Priestly writers: the Tabernacle is the place where God is physically present, where, through the priesthood, Israel could be in direct, literal communion with him. The heart of Exodus is the Sinaitic covenant. A covenant is a legal document binding two parties to take on certain obligations towards each other. There are several covenants in the Bible, and in each case they exhibit at least some of the elements in real-life treaties of the ancient Middle East: a preamble, historical prologue, stipulations, deposition and reading, list of witnesses, blessings and curses, and ratification by animal sacrifice. Biblical covenants, in contrast to Eastern covenants in general, are between a god, Yahweh, and a people, Israel, instead of between a strong ruler and a weaker vassal. God elects Israel for salvation because the "sons of Israel" are "the firstborn son" of the God of Israel, descended through Shem and Abraham to the chosen line of Jacob whose name is changed to Israel. The goal of the divine plan in Exodus is a return to humanity's state in Eden, so that God can dwell with the Israelites as he had with Adam and Eve through the Ark and Tabernacle, which together form a model of the universe; in later Abrahamic religions Israel becomes the guardian of God's plan for humanity, to bring "God's creation blessing to mankind" begun in Adam. List of Torah portions in the Book of Exodus:
[ { "paragraph_id": 0, "text": "The Book of Exodus (from Ancient Greek: Ἔξοδος, romanized: Éxodos; Biblical Hebrew: שְׁמוֹת Šəmōṯ, 'Names'; Latin: Liber Exodus) is the second religious book of the Bible. It is a narrative of the Exodus, the origin myth of the Israelites leaving slavery in Biblical Egypt through the strength of their deity named Yahweh, who according to the story chose them as his people. The Israelites then journey with the legendary prophet Moses to Mount Sinai, where Yahweh gives the 10 commandments and they enter into a covenant with Yahweh, who promises to make them a \"holy nation, and a kingdom of priests\" on condition of their faithfulness. He gives them their laws and instructions to build the Tabernacle, the means by which he will come from heaven and dwell with them and lead them in a holy war to conquer Canaan (the \"Promised Land\"), which has earlier, according to the myth of Genesis, been promised to the \"seed\" of Abraham, the legendary patriarch of the Israelites.", "title": "" }, { "paragraph_id": 1, "text": "Traditionally ascribed to Moses himself, modern scholars see its initial composition as a product of the Babylonian exile (6th century BCE), based on earlier written sources and oral traditions, with final revisions in the Persian post-exilic period (5th century BCE). American biblical scholar Carol Meyers, in her commentary on Exodus, suggests that it is arguably the most important book in the Bible, as it presents the defining features of Israel's identity—memories of a past marked by hardship and escape, a binding covenant with their God, who chooses Israel, and the establishment of the life of the community and the guidelines for sustaining it. The consensus among modern scholars is that the story in the Book of Exodus is best understood as a myth.", "title": "" }, { "paragraph_id": 2, "text": "The English name Exodus comes from the Ancient Greek: ἔξοδος, romanized: éxodos, lit. 'way out', from ἐξ-, ex-, 'out' and ὁδός, hodós, 'path', 'road'. In Hebrew the book's title is שְׁמוֹת, shemōt, \"Names\", from the beginning words of the text: \"These are the names of the sons of Israel\" (Hebrew: וְאֵלֶּה שְׁמֹות בְּנֵי יִשְׂרָאֵל).", "title": "Title" }, { "paragraph_id": 3, "text": "Most mainstream scholars do not accept the biblical Exodus account as historical for a number of reasons. It is generally agreed that the Exodus stories were written centuries after the apparent setting of the stories. Archaeologists Israel Finkelstein and Neil Asher Silberman argue that archaeology has not found evidence for even a small band of wandering Israelites living in the Sinai: \"The conclusion – that Exodus did not happen at the time and in the manner described in the Bible – seems irrefutable [...] repeated excavations and surveys throughout the entire area have not provided even the slightest evidence\". Instead, they argue how modern archaeology suggests continuity between Canaanite and Israelite settlements, indicating a heavily Canaanite origin for Israel, with little suggestion that a group of foreigners from Egypt comprised early Israel.", "title": "Historicity" }, { "paragraph_id": 4, "text": "However, a majority of scholars believe that the story has some historical basis, though disagreeing widely about what that historical kernel might have been. Kenton Sparks refers to it as \"mythologized history\". Some scholars such as Benjamin J. 
Noonan have pointed out that the presence of Egyptian cognates in the Exodus and wilderness traditions \"entered Hebrew during the Late Bronze Age, precisely when we would expect them to have been borrowed if the events of these narratives really occurred\", challenging the assumption of a post-exilic tradition. Furthermore, in direct response to popular claims that the Exodus \"wandering period\" lacks evidence in the Sinai region, various anthropologists of Near Eastern history have noted that a lack of material culture from the Israelites in the Book of Exodus is actually expected given what is known about historical and present semi-nomadic peoples.", "title": "Historicity" }, { "paragraph_id": 5, "text": "There is no unanimous agreement among scholars on the structure of Exodus. One strong possibility is that it is a diptych (i.e., divided into two parts), with the division between parts 1 and 2 at the crossing of the Red Sea or at the beginning of the theophany (appearance of God) in chapter 19. On this plan, the first part tells of God's rescue of his people from Egypt and their journey under his care to Sinai (chapters 1–19) and the second tells of the covenant between them (chapters 20–40).", "title": "Structure" }, { "paragraph_id": 6, "text": "The text of the Book of Exodus begins after the events at the end of the Book of Genesis where Jacob's sons and their families joined their brother Joseph in Egypt, which Joseph had saved from famine. It is 400 years later and Egypt's new Pharaoh, who does not remember Joseph, is fearful that the enslaved and now numerous Israelites could become a fifth column. He hardens their labor and orders the killing of all newborn boys. A Levite woman named Jochebed saves her baby by setting him adrift on the Nile in an ark of bulrushes. Pharaoh's daughter finds the child, names him Moses, and brings him up as her own.", "title": "Summary" }, { "paragraph_id": 7, "text": "Later, a grown Moses goes out to see his kinsmen. He witnesses the abuse of a Hebrew slave by an Egyptian overseer. Angered, Moses kills him and flees into Midian to escape punishment. There, he marries Zipporah, daughter of Jethro, a Midianite priest. While tending Jethro's flock, Moses encounters God in a burning bush. Moses asks God for his name, to which God replies with three words, often translated as \"I Am that I Am.\" This is the book's explanation for the origin of the name Yahweh, as God is thereafter known. God tells Moses to return to Egypt, free the Hebrews from slavery and lead them into Canaan, the land promised to the seed of Abraham in Genesis. On the journey back to Egypt, God seeks to kill Moses. Zipporah circumcises their son and the attack stops. (See Zipporah at the inn.)", "title": "Summary" }, { "paragraph_id": 8, "text": "Moses reunites with his brother Aaron and, returning to Egypt, convenes the Israelite elders, preparing them to go into the wilderness to worship God. Pharaoh refuses to release the Israelites from their work for the festival, and so God curses the Egyptians with ten terrible plagues, such as a river of blood, an outbreak of frogs, and the thick darkness. Moses is commanded by God to fix the spring month of Aviv at the head of the Hebrew calendar. The Israelites are to take a lamb on the 10th day of the month, sacrifice the lamb on the 14th day, daub its blood on their mezuzot—doorposts and lintels, and to observe the Passover meal that night, during the full moon. 
The 10th plague comes that night, causing the death of all Egyptian firstborn sons, prompting Pharaoh to expel the Israelites. Regretting his decision, Pharaoh commands his chariot army after the Israelites, who appear trapped at the Red Sea. God parts the sea, allowing the Israelites to pass through, before drowning Pharaoh's pursuing forces.", "title": "Summary" }, { "paragraph_id": 9, "text": "As desert life proves arduous, the Israelites complain and long for Egypt, but God miraculously provides manna for them to eat and water to drink. The Israelites arrive at the mountain of God, where Moses's father-in-law Jethro visits Moses; at his suggestion, Moses appoints judges over Israel. God asks whether they will agree to be his people – They accept. The people gather at the foot of the mountain, and with thunder and lightning, fire and clouds of smoke, the sound of trumpets, and the trembling of the mountain, God appears on the peak, and the people see the cloud and hear the voice (or possibly sound) of God. God tells Moses to ascend the mountain. God pronounces the Ten Commandments (the Ethical Decalogue) in the hearing of all Israel. Moses goes up the mountain into the presence of God, who pronounces the Covenant Code of ritual and civil law and promises Canaan to them if they obey. Moses comes down from the mountain and writes down God's words, and the people agree to keep them. God calls Moses up the mountain again, where he remains for forty days and forty nights, after which he returns, bearing the set of stone tablets.", "title": "Summary" }, { "paragraph_id": 10, "text": "God gives Moses instructions for the construction of the tabernacle so that God may dwell permanently among his chosen people, along with instructions for the priestly vestments, the altar and its appurtenances, procedures for the ordination of priests, and the daily sacrifice offerings. Aaron becomes the first hereditary high priest. God gives Moses the two tablets of stone containing the words of the ten commandments, written with the \"finger of God\".", "title": "Summary" }, { "paragraph_id": 11, "text": "While Moses is with God, Aaron casts a golden calf, which the people worship. God informs Moses of their apostasy and threatens to kill them all, but relents when Moses pleads for them. Moses comes down from the mountain, smashes the stone tablets in anger, and commands the Levites to massacre the unfaithful Israelites. God commands Moses to construct two new tablets. Moses ascends the mountain again, where God dictates the Ten Commandments for Moses to write on the tablets.", "title": "Summary" }, { "paragraph_id": 12, "text": "Moses descends from the mountain with a transformed face; from that time onwards he must hide his face with a veil. Moses assembles the Hebrews and repeats to them the commandments he has received from God, which are to keep the Sabbath and to construct the Tabernacle. The Israelites do as they are commanded. From that time God dwells in the Tabernacle and orders the travels of the Hebrews.", "title": "Summary" }, { "paragraph_id": 13, "text": "Jewish and Christian tradition viewed Moses as the author of Exodus and the entire Torah, but by the end of the 19th century the increasing awareness of discrepancies, inconsistencies, repetitions and other features of the Pentateuch had led scholars to abandon this idea. 
In approximate round dates, the process which produced Exodus and the Pentateuch probably began around 600 BCE when existing oral and written traditions were brought together to form books recognizable as those we know, reaching their final form as unchangeable sacred texts around 400 BCE.", "title": "Composition" }, { "paragraph_id": 14, "text": "Although patent mythical elements are not so prominent in Exodus as in Genesis, ancient legends may have an influence on the book's form or content: for example, the story of the infant Moses's salvation from the Nile is argued to be based on an earlier legend of king Sargon of Akkad, while the story of the parting of the Red Sea may trade on Mesopotamian creation mythology. Similarly, the Covenant Code (the law code in Exodus 20:22–23:33) has some similarities in both content and structure with the Laws of Hammurabi. These potential influences serve to reinforce the conclusion that the Book of Exodus originated in the exiled Jewish community of 6th-century BCE Babylon, but not all the potential sources are Mesopotamian: the story of Moses's flight to Midian following the murder of the Egyptian overseer may draw on the Egyptian Story of Sinuhe.", "title": "Composition" }, { "paragraph_id": 15, "text": "Biblical scholars describe the Bible's theologically-motivated history writing as \"salvation history\", meaning a history of God's saving actions that give identity to Israel – the promise of offspring and land to the ancestors, the Exodus from Egypt (in which God saves Israel from slavery), the wilderness wandering, the revelation at Sinai, and the hope for the future life in the promised land.", "title": "Themes" }, { "paragraph_id": 16, "text": "A theophany is a manifestation (appearance) of a god – in the Bible, an appearance of the God of Israel, accompanied by storms – the earth trembles, the mountains quake, the heavens pour rain, thunder peals and lightning flashes. The theophany in Exodus begins \"the third day\" from their arrival at Sinai in chapter 19: Yahweh and the people meet at the mountain, God appears in the storm and converses with Moses, giving him the Ten Commandments while the people listen. The theophany is therefore a public experience of divine law.", "title": "Themes" }, { "paragraph_id": 17, "text": "The second half of Exodus marks the point at which, and describes the process through which, God's theophany becomes a permanent presence for Israel via the Tabernacle. That so much of the book (chapters 25–31, 35–40) describes the plans of the Tabernacle demonstrates the importance it played in the perception of Second Temple Judaism at the time of the text's redaction by the Priestly writers: the Tabernacle is the place where God is physically present, where, through the priesthood, Israel could be in direct, literal communion with him.", "title": "Themes" }, { "paragraph_id": 18, "text": "The heart of Exodus is the Sinaitic covenant. A covenant is a legal document binding two parties to take on certain obligations towards each other. There are several covenants in the Bible, and in each case they exhibit at least some of the elements in real-life treaties of the ancient Middle East: a preamble, historical prologue, stipulations, deposition and reading, list of witnesses, blessings and curses, and ratification by animal sacrifice. 
Biblical covenants, in contrast to Eastern covenants in general, are between a god, Yahweh, and a people, Israel, instead of between a strong ruler and a weaker vassal.", "title": "Themes" }, { "paragraph_id": 19, "text": "God elects Israel for salvation because the \"sons of Israel\" are \"the firstborn son\" of the God of Israel, descended through Shem and Abraham to the chosen line of Jacob whose name is changed to Israel. The goal of the divine plan in Exodus is a return to humanity's state in Eden, so that God can dwell with the Israelites as he had with Adam and Eve through the Ark and Tabernacle, which together form a model of the universe; in later Abrahamic religions Israel becomes the guardian of God's plan for humanity, to bring \"God's creation blessing to mankind\" begun in Adam.", "title": "Themes" }, { "paragraph_id": 20, "text": "List of Torah portions in the Book of Exodus:", "title": "Judaism's weekly Torah portions in the Book of Exodus" } ]
The Book of Exodus is the second religious book of the Bible. It is a narrative of the Exodus, the origin myth of the Israelites leaving slavery in Biblical Egypt through the strength of their deity named Yahweh, who according to the story chose them as his people. The Israelites then journey with the legendary prophet Moses to Mount Sinai, where Yahweh gives the 10 commandments and they enter into a covenant with Yahweh, who promises to make them a "holy nation, and a kingdom of priests" on condition of their faithfulness. He gives them their laws and instructions to build the Tabernacle, the means by which he will come from heaven and dwell with them and lead them in a holy war to conquer Canaan, which has earlier, according to the myth of Genesis, been promised to the "seed" of Abraham, the legendary patriarch of the Israelites. Traditionally ascribed to Moses himself, modern scholars see its initial composition as a product of the Babylonian exile, based on earlier written sources and oral traditions, with final revisions in the Persian post-exilic period. American biblical scholar Carol Meyers, in her commentary on Exodus, suggests that it is arguably the most important book in the Bible, as it presents the defining features of Israel's identity—memories of a past marked by hardship and escape, a binding covenant with their God, who chooses Israel, and the establishment of the life of the community and the guidelines for sustaining it. The consensus among modern scholars is that the story in the Book of Exodus is best understood as a myth.
2001-09-20T00:19:56Z
2023-12-30T07:49:38Z
[ "Template:Lang-he", "Template:One source section", "Template:Lang-la", "Template:Refend", "Template:S-bef", "Template:Lang-hbo", "Template:Librivox book", "Template:S-hou", "Template:ISBN", "Template:S-end", "Template:Ten Commandments", "Template:Redirect", "Template:Multiple image", "Template:Refbegin", "Template:Commons category", "Template:Webarchive", "Template:S-start", "Template:Books of the Bible", "Template:Authority control", "Template:About", "Template:Lang-grc", "Template:Sfn", "Template:Main", "Template:Reflist", "Template:Wikisource", "Template:S-aft", "Template:Short description", "Template:Tanakh OT", "Template:Portal", "Template:Bibleref2", "Template:Cite book", "Template:Wikiquote", "Template:S-ttl", "Template:Book of Exodus" ]
https://en.wikipedia.org/wiki/Book_of_Exodus
9,663
Electronics
Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. Electronics is a subfield of electrical engineering, but it differs from it in that it focuses on using active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog to digital. Electronics also encompasses the fields of microelectronics, nanoelectronics, optoelectronics, and quantum electronics, which deal with the fabrication and application of electronic devices at microscopic, nanoscopic, optical, and quantum scales. Electronics has a profound impact on various aspects of modern society and culture, such as communication, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which produces the basic materials and components for electronic devices and circuits. The semiconductor industry is one of the largest and most profitable sectors in the global economy, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017. Electronics has hugely influenced the development of modern society. The identification of the electron in 1897, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages such as radio signals from a radio antenna possible with a non-mechanical device. Vacuum tubes (thermionic valves) were the first active electronic components, controlling current flow by influencing the flow of individual electrons. They were responsible for the electronics revolution of the first half of the twentieth century, and they enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and communications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry. The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, vacuum tubes played a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since then, solid-state devices have all but completely taken over.
In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic and peripherals. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The MOSFET (MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment. As the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them must be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s, followed by VLSI. In 2008, billion-transistor processors became commercially available. An electronic component is any component in an electronic system either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components are capacitors, inductors, resistors, whilst active components are such as semiconductor devices; transistors and thyristors, which control current flow at electron level. Electronic circuit functions can be divided into two function groups: analog and digital. A particular device may consist of circuitry that has either or a mix of the two types. Analog circuits are becoming less common, as many of their functions are being digitized. Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage or current as opposed to discrete levels as in digital circuits. The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component, to systems containing thousands of components. Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators. 
One rarely finds modern circuits that are entirely analog – these days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called "mixed signal" rather than analog or digital. Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output. In fact, many digital circuits are actually implemented as variations of analog circuits similar to this example – after all, all aspects of the real physical world are essentially analog, so digital effects are only realized by constraining analog behaviour. Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labelled "0" and "1". Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as "0" or "1" is arbitrary. Ternary (with three states) logic has been studied, and some prototype computers made. Mass-produced binary systems have caused lower significance for using ternary logic. Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital signal processors, which measure, filter or compress continuous real-world analog signals, are another example. Transistors such as MOSFET are used to control binary states. Highly integrated devices: Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user. Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice. Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. 
Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others. Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy. Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise cannot be removed as they are due to limitations in physical properties. Many different methods of connecting components have been used over the years. For instance, early electronics often used point to point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined to go to European markets. Electrical components are generally mounted in the following ways: The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi who could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly. However, during the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor, and increasing technological sophistication, became widely available there. Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology. By that time, Taiwan had become the world's leading source of advanced semiconductors—followed by South Korea, the United States, Japan, Singapore, and China. 
Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel.
[ { "paragraph_id": 0, "text": "Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. Electronics is a subfield of electrical engineering, but it differs from it in that it focuses on using active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog to digital. Electronics also encompasses the fields of microelectronics, nanoelectronics, optoelectronics, and quantum electronics, which deal with the fabrication and application of electronic devices at microscopic, nanoscopic, optical, and quantum scales.", "title": "" }, { "paragraph_id": 1, "text": "Electronics have a profound impact on various aspects of modern society and culture, such as communication, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which produces the basic materials and components for electronic devices and circuits. The semiconductor industry is one of the largest and most profitable sectors in the global economy, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017.", "title": "" }, { "paragraph_id": 2, "text": "Electronics has hugely influenced the development of modern society. The identification of the electron in 1897, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages such as radio signals from a radio antenna possible with a non-mechanical device.", "title": "History and development" }, { "paragraph_id": 3, "text": "Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, They were responsible for the electronics revolution of the first half of the twentieth century, They enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and communications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry.", "title": "History and development" }, { "paragraph_id": 4, "text": "The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, vacuum tubes played a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since then, solid-state devices have all but completely taken over. 
Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode-ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices.", "title": "History and development" }, { "paragraph_id": 5, "text": "In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic and peripherals. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.", "title": "History and development" }, { "paragraph_id": 6, "text": "The MOSFET (MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment.", "title": "History and development" }, { "paragraph_id": 7, "text": "As the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them must be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s, followed by VLSI. In 2008, billion-transistor processors became commercially available.", "title": "History and development" }, { "paragraph_id": 8, "text": "An electronic component is any component in an electronic system either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components are capacitors, inductors, resistors, whilst active components are such as semiconductor devices; transistors and thyristors, which control current flow at electron level.", "title": "Devices and components" }, { "paragraph_id": 9, "text": "Electronic circuit functions can be divided into two function groups: analog and digital. A particular device may consist of circuitry that has either or a mix of the two types. Analog circuits are becoming less common, as many of their functions are being digitized.", "title": "Types of circuits" }, { "paragraph_id": 10, "text": "Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. 
Analog circuits use a continuous range of voltage or current as opposed to discrete levels as in digital circuits.", "title": "Types of circuits" }, { "paragraph_id": 11, "text": "The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component, to systems containing thousands of components.", "title": "Types of circuits" }, { "paragraph_id": 12, "text": "Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.", "title": "Types of circuits" }, { "paragraph_id": 13, "text": "One rarely finds modern circuits that are entirely analog – these days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called \"mixed signal\" rather than analog or digital.", "title": "Types of circuits" }, { "paragraph_id": 14, "text": "Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output. In fact, many digital circuits are actually implemented as variations of analog circuits similar to this example – after all, all aspects of the real physical world are essentially analog, so digital effects are only realized by constraining analog behaviour.", "title": "Types of circuits" }, { "paragraph_id": 15, "text": "Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra and are the basis of all digital computers. To most engineers, the terms \"digital circuit\", \"digital system\" and \"logic\" are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labelled \"0\" and \"1\". Often logic \"0\" will be a lower voltage and referred to as \"Low\" while logic \"1\" is referred to as \"High\". However, some systems use the reverse definition (\"0\" is \"High\") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as \"0\" or \"1\" is arbitrary.", "title": "Types of circuits" }, { "paragraph_id": 16, "text": "Ternary (with three states) logic has been studied, and some prototype computers made. Mass-produced binary systems have caused lower significance for using ternary logic. Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital signal processors, which measure, filter or compress continuous real-world analog signals, are another example. Transistors such as MOSFET are used to control binary states.", "title": "Types of circuits" }, { "paragraph_id": 17, "text": "Highly integrated devices:", "title": "Types of circuits" }, { "paragraph_id": 18, "text": "Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. 
The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user.", "title": "Design" }, { "paragraph_id": 19, "text": "Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronic devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice.", "title": "Design" }, { "paragraph_id": 20, "text": "Today's electronics engineers can design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.", "title": "Design" }, { "paragraph_id": 21, "text": "Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long-term reliability. Heat dissipation is mostly achieved by passive conduction and convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.", "title": "Negative qualities" }, { "paragraph_id": 22, "text": "Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated; thermally generated noise can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed, as they are due to limitations in physical properties.", "title": "Negative qualities" }, { "paragraph_id": 23, "text": "Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern-day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2), characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for European markets.", "title": "Packaging methods" }, { "paragraph_id": 24, "text": "Electrical components are generally mounted in the following ways:", "title": "Packaging methods" }, { "paragraph_id": 25, "text": "The electronics industry consists of various sectors. 
The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in online sales in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi, which could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers had become the world leaders in semiconductor development and assembly.", "title": "Industry" }, { "paragraph_id": 26, "text": "However, during the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor and increasing technological sophistication became widely available there.", "title": "Industry" }, { "paragraph_id": 27, "text": "Over three decades, the United States' global share of semiconductor manufacturing capacity fell from 37% in 1990 to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology.", "title": "Industry" }, { "paragraph_id": 28, "text": "By that time, Taiwan had become the world's leading source of advanced semiconductors, followed by South Korea, the United States, Japan, Singapore, and China.", "title": "Industry" }, { "paragraph_id": 29, "text": "Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel.", "title": "Industry" }, { "paragraph_id": 30, "text": "", "title": "External links" } ]
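A minimal illustrative sketch, in Python, of the comparator behaviour described in the "Types of circuits" paragraphs above: a continuous input voltage is mapped onto one of the two logic levels used by binary digital circuits, logic 0 ("Low") or logic 1 ("High"). The 2.5 V threshold and the sample input voltages are assumptions made for illustration, not values taken from the article.

# Comparator sketch: quantise a continuous input voltage into two logic levels.
# The threshold below is an assumed example value.
V_THRESHOLD = 2.5  # volts (assumed)

def comparator(v_in: float) -> int:
    """Return logic 1 ("High") if the input exceeds the threshold, else logic 0 ("Low")."""
    return 1 if v_in > V_THRESHOLD else 0

for v in (0.3, 1.2, 2.4, 2.6, 4.9):
    level = comparator(v)
    print(f"{v:.1f} V -> logic {level} ({'High' if level else 'Low'})")

This mirrors the article's point that many digital behaviours are realised by constraining an underlying analog quantity to discrete levels.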
Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. Electronics is a subfield of electrical engineering but differs from it in its focus on using active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog to digital. Electronics also encompasses the fields of microelectronics, nanoelectronics, optoelectronics, and quantum electronics, which deal with the fabrication and application of electronic devices at microscopic, nanoscopic, optical, and quantum scales. Electronics has a profound impact on various aspects of modern society and culture, such as communication, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which produces the basic materials and components for electronic devices and circuits. The semiconductor industry is one of the largest and most profitable sectors in the global economy, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017.
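A small worked example, again in Python with assumed component values, of the point made in the "Negative qualities" paragraphs above that thermally generated noise falls as the operating temperature is lowered. It evaluates the standard Johnson-Nyquist expression for the RMS thermal noise voltage of a resistor, v_n = sqrt(4 * k_B * T * R * bandwidth); the 1 kOhm resistance and 10 kHz bandwidth are illustrative assumptions.

import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
R = 1_000.0           # resistance in ohms (assumed 1 kOhm)
BANDWIDTH = 10_000.0  # measurement bandwidth in hertz (assumed 10 kHz)

def thermal_noise_voltage(temperature_k: float) -> float:
    """RMS Johnson-Nyquist noise voltage of the resistor over the bandwidth."""
    return math.sqrt(4.0 * K_B * temperature_k * R * BANDWIDTH)

for t in (300.0, 77.0):  # room temperature vs. liquid-nitrogen temperature
    print(f"T = {t:5.1f} K -> v_n = {thermal_noise_voltage(t) * 1e6:.3f} microvolts")

For these assumed values the noise voltage drops from roughly 0.41 microvolts at 300 K to about 0.21 microvolts at 77 K, consistent with the article's statement that cooling the circuit reduces thermal noise.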
2001-11-02T10:40:53Z
2023-12-29T08:04:14Z
[ "Template:Short description", "Template:Webarchive", "Template:Electronic systems", "Template:Technology topics", "Template:Main", "Template:Cite book", "Template:Reflist", "Template:Cite journal", "Template:Authority control", "Template:About", "Template:US$", "Template:Cbignore", "Template:ISBN", "Template:Commons category", "Template:Machines", "Template:Div col", "Template:Div col end", "Template:Clarify", "Template:Nbsp", "Template:Cite web", "Template:Wikibooks", "Template:See also", "Template:Wikisource", "Template:Curlie", "Template:Portal bar", "Template:Use dmy dates", "Template:Further", "Template:Portal", "Template:Cite news", "Template:Wikiversity" ]
https://en.wikipedia.org/wiki/Electronics